Test Report: QEMU_macOS 19313

                    
761b7fc65973460b6ca8311b028efa5f69b15d0b:2024-07-22:35453

Failed tests (94/278)

Order  Failed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 18.46
7 TestDownloadOnly/v1.20.0/kubectl 0
31 TestOffline 9.81
55 TestCertOptions 10.02
56 TestCertExpiration 195.33
57 TestDockerFlags 10.14
58 TestForceSystemdFlag 9.88
59 TestForceSystemdEnv 11.21
104 TestFunctional/parallel/ServiceCmdConnect 27.08
176 TestMultiControlPlane/serial/StopSecondaryNode 312.3
177 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 225.14
178 TestMultiControlPlane/serial/RestartSecondaryNode 305.25
180 TestMultiControlPlane/serial/RestartClusterKeepsNodes 332.62
181 TestMultiControlPlane/serial/DeleteSecondaryNode 0.11
182 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.08
183 TestMultiControlPlane/serial/StopCluster 143.91
186 TestImageBuild/serial/Setup 9.85
189 TestJSONOutput/start/Command 9.69
195 TestJSONOutput/pause/Command 0.08
201 TestJSONOutput/unpause/Command 0.04
218 TestMinikubeProfile 10.02
221 TestMountStart/serial/StartWithMountFirst 9.84
224 TestMultiNode/serial/FreshStart2Nodes 9.85
225 TestMultiNode/serial/DeployApp2Nodes 98.59
226 TestMultiNode/serial/PingHostFrom2Pods 0.08
227 TestMultiNode/serial/AddNode 0.08
228 TestMultiNode/serial/MultiNodeLabels 0.06
229 TestMultiNode/serial/ProfileList 0.07
230 TestMultiNode/serial/CopyFile 0.06
231 TestMultiNode/serial/StopNode 0.13
232 TestMultiNode/serial/StartAfterStop 57.27
233 TestMultiNode/serial/RestartKeepsNodes 8.81
234 TestMultiNode/serial/DeleteNode 0.1
235 TestMultiNode/serial/StopMultiNode 1.92
236 TestMultiNode/serial/RestartMultiNode 5.26
237 TestMultiNode/serial/ValidateNameConflict 20.03
241 TestPreload 9.91
243 TestScheduledStopUnix 9.87
244 TestSkaffold 13.03
247 TestRunningBinaryUpgrade 613.24
249 TestKubernetesUpgrade 18.53
262 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 1.81
263 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 1.3
265 TestStoppedBinaryUpgrade/Upgrade 579.23
267 TestPause/serial/Start 9.78
277 TestNoKubernetes/serial/StartWithK8s 9.93
278 TestNoKubernetes/serial/StartWithStopK8s 5.27
279 TestNoKubernetes/serial/Start 5.3
283 TestNoKubernetes/serial/StartNoArgs 5.32
285 TestNetworkPlugins/group/auto/Start 9.73
286 TestNetworkPlugins/group/calico/Start 9.81
287 TestNetworkPlugins/group/custom-flannel/Start 9.73
288 TestNetworkPlugins/group/false/Start 9.96
289 TestNetworkPlugins/group/kindnet/Start 9.91
290 TestNetworkPlugins/group/flannel/Start 9.82
291 TestNetworkPlugins/group/enable-default-cni/Start 9.74
292 TestNetworkPlugins/group/bridge/Start 9.72
293 TestNetworkPlugins/group/kubenet/Start 9.86
296 TestStartStop/group/old-k8s-version/serial/FirstStart 9.83
297 TestStartStop/group/old-k8s-version/serial/DeployApp 0.09
298 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.11
301 TestStartStop/group/old-k8s-version/serial/SecondStart 5.25
302 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.03
303 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.06
304 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.07
305 TestStartStop/group/old-k8s-version/serial/Pause 0.1
307 TestStartStop/group/no-preload/serial/FirstStart 9.93
309 TestStartStop/group/embed-certs/serial/FirstStart 10.4
310 TestStartStop/group/no-preload/serial/DeployApp 0.1
311 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.15
314 TestStartStop/group/embed-certs/serial/DeployApp 0.09
315 TestStartStop/group/no-preload/serial/SecondStart 5.27
316 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.13
319 TestStartStop/group/embed-certs/serial/SecondStart 5.26
320 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.03
321 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.06
322 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.07
323 TestStartStop/group/no-preload/serial/Pause 0.1
325 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 9.81
326 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.03
327 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.06
328 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.07
329 TestStartStop/group/embed-certs/serial/Pause 0.1
331 TestStartStop/group/newest-cni/serial/FirstStart 9.88
332 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.09
333 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.11
336 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 6.24
341 TestStartStop/group/newest-cni/serial/SecondStart 5.26
342 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.03
343 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.06
344 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.07
345 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.1
348 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.07
349 TestStartStop/group/newest-cni/serial/Pause 0.1

TestDownloadOnly/v1.20.0/json-events (18.46s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-521000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-521000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 : exit status 40 (18.453675583s)

-- stdout --
	{"specversion":"1.0","id":"7e63ab1a-a1d6-41c7-8157-8eac01bdf0f2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-521000] minikube v1.33.1 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"4f0948ed-59f8-4d29-97d1-33c1cb296809","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19313"}}
	{"specversion":"1.0","id":"e70c5dfc-d944-49cc-92b9-d032d53cba07","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19313-1127/kubeconfig"}}
	{"specversion":"1.0","id":"6a03f268-5572-4749-a466-ef59b08a2456","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"4030987a-e07a-49b6-8859-ca0b81da6b8f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"d743f45d-af68-4d9f-b2b1-41f65608331f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19313-1127/.minikube"}}
	{"specversion":"1.0","id":"35c3d513-9ab3-484f-b2c8-951acb94eb85","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"c0eafe52-9397-4d1a-bb4f-1742f898cffb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"ce07e7f1-ddb9-4158-8828-820452a62d20","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"4e540e83-a200-40f6-b36a-2d8af71bbd34","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"d72025dd-1bd0-4410-b947-c05e128bb71e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"download-only-521000\" primary control-plane node in \"download-only-521000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"eba1252b-e23e-4dd1-8cbc-48379cb66753","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.20.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"868b2844-d4aa-4fda-a54e-8c42d03e6521","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19313-1127/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x106ea9a60 0x106ea9a60 0x106ea9a60 0x106ea9a60 0x106ea9a60 0x106ea9a60 0x106ea9a60] Decompressors:map[bz2:0x1400080cda0 gz:0x1400080cda8 tar:0x1400080cd30 tar.bz2:0x1400080cd40 tar.gz:0x1400080cd70 tar.xz:0x1400080cd80 tar.zst:0x1400080cd90 tbz2:0x1400080cd40 tgz:0x14
00080cd70 txz:0x1400080cd80 tzst:0x1400080cd90 xz:0x1400080cdb0 zip:0x1400080cde0 zst:0x1400080cdb8] Getters:map[file:0x14000791600 http:0x1400098a280 https:0x1400098a2d0] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"b7e4fb1b-a026-4e05-958d-812307bdf5d7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
** stderr ** 
	I0722 03:28:11.820631    1620 out.go:291] Setting OutFile to fd 1 ...
	I0722 03:28:11.820788    1620 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 03:28:11.820792    1620 out.go:304] Setting ErrFile to fd 2...
	I0722 03:28:11.820794    1620 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 03:28:11.820915    1620 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19313-1127/.minikube/bin
	W0722 03:28:11.820992    1620 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19313-1127/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19313-1127/.minikube/config/config.json: no such file or directory
	I0722 03:28:11.822254    1620 out.go:298] Setting JSON to true
	I0722 03:28:11.839573    1620 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1660,"bootTime":1721642431,"procs":450,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0722 03:28:11.839647    1620 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0722 03:28:11.846652    1620 out.go:97] [download-only-521000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0722 03:28:11.846778    1620 notify.go:220] Checking for updates...
	W0722 03:28:11.846786    1620 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/preloaded-tarball: no such file or directory
	I0722 03:28:11.849668    1620 out.go:169] MINIKUBE_LOCATION=19313
	I0722 03:28:11.852744    1620 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19313-1127/kubeconfig
	I0722 03:28:11.857683    1620 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0722 03:28:11.860712    1620 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0722 03:28:11.863708    1620 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19313-1127/.minikube
	W0722 03:28:11.869667    1620 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0722 03:28:11.869871    1620 driver.go:392] Setting default libvirt URI to qemu:///system
	I0722 03:28:11.875808    1620 out.go:97] Using the qemu2 driver based on user configuration
	I0722 03:28:11.875832    1620 start.go:297] selected driver: qemu2
	I0722 03:28:11.875836    1620 start.go:901] validating driver "qemu2" against <nil>
	I0722 03:28:11.875939    1620 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0722 03:28:11.879613    1620 out.go:169] Automatically selected the socket_vmnet network
	I0722 03:28:11.886494    1620 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0722 03:28:11.886581    1620 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0722 03:28:11.886648    1620 cni.go:84] Creating CNI manager for ""
	I0722 03:28:11.886665    1620 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0722 03:28:11.886729    1620 start.go:340] cluster config:
	{Name:download-only-521000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-521000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Con
tainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSo
ck: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 03:28:11.892081    1620 iso.go:125] acquiring lock: {Name:mkd71eaf3e91c1dd737b75fca5ca69ff9bdad18f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 03:28:11.896532    1620 out.go:97] Downloading VM boot image ...
	I0722 03:28:11.896546    1620 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso
	I0722 03:28:18.263064    1620 out.go:97] Starting "download-only-521000" primary control-plane node in "download-only-521000" cluster
	I0722 03:28:18.263102    1620 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0722 03:28:18.315099    1620 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0722 03:28:18.315117    1620 cache.go:56] Caching tarball of preloaded images
	I0722 03:28:18.315255    1620 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0722 03:28:18.320553    1620 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0722 03:28:18.320559    1620 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0722 03:28:18.401544    1620 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0722 03:28:28.716491    1620 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0722 03:28:28.716646    1620 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0722 03:28:29.412509    1620 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0722 03:28:29.412710    1620 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/download-only-521000/config.json ...
	I0722 03:28:29.412742    1620 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/download-only-521000/config.json: {Name:mk9ba44c13276aeb01bcbfbf249d7d467b0155f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 03:28:29.412972    1620 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0722 03:28:29.413153    1620 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0722 03:28:30.205356    1620 out.go:169] 
	W0722 03:28:30.209482    1620 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19313-1127/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x106ea9a60 0x106ea9a60 0x106ea9a60 0x106ea9a60 0x106ea9a60 0x106ea9a60 0x106ea9a60] Decompressors:map[bz2:0x1400080cda0 gz:0x1400080cda8 tar:0x1400080cd30 tar.bz2:0x1400080cd40 tar.gz:0x1400080cd70 tar.xz:0x1400080cd80 tar.zst:0x1400080cd90 tbz2:0x1400080cd40 tgz:0x1400080cd70 txz:0x1400080cd80 tzst:0x1400080cd90 xz:0x1400080cdb0 zip:0x1400080cde0 zst:0x1400080cdb8] Getters:map[file:0x14000791600 http:0x1400098a280 https:0x1400098a2d0] Dir:false ProgressList
ener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0722 03:28:30.209507    1620 out_reason.go:110] 
	W0722 03:28:30.215447    1620 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0722 03:28:30.219335    1620 out.go:169] 

** /stderr **
aaa_download_only_test.go:83: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-521000" "--force" "--alsologtostderr" "--kubernetes-version=v1.20.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.20.0/json-events (18.46s)
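The root cause above is a 404 on the kubectl checksum URL: dl.k8s.io does not serve a darwin/arm64 kubectl artifact for v1.20.0, so the download-only run cannot cache it. A minimal standalone sketch (not part of the test suite; it only assumes network access from the CI host) to confirm the missing artifact without running minikube:

```go
package main

import (
	"fmt"
	"net/http"
)

func main() {
	// Checksum URL taken from the failure message above; a 404 here means the
	// v1.20.0 darwin/arm64 kubectl release artifact simply does not exist.
	url := "https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256"
	resp, err := http.Head(url)
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	// Expected to print a 404 status, matching the "bad response code: 404" in the log.
	fmt.Println(url, "->", resp.Status)
}
```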

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:175: expected the file for binary exist at "/Users/jenkins/minikube-integration/19313-1127/.minikube/cache/darwin/arm64/v1.20.0/kubectl" but got error stat /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/darwin/arm64/v1.20.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestOffline (9.81s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-634000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-634000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (9.670078s)

-- stdout --
	* [offline-docker-634000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19313
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19313-1127/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19313-1127/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "offline-docker-634000" primary control-plane node in "offline-docker-634000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-634000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0722 04:15:24.768470    4174 out.go:291] Setting OutFile to fd 1 ...
	I0722 04:15:24.768600    4174 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:15:24.768604    4174 out.go:304] Setting ErrFile to fd 2...
	I0722 04:15:24.768607    4174 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:15:24.768729    4174 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19313-1127/.minikube/bin
	I0722 04:15:24.770115    4174 out.go:298] Setting JSON to false
	I0722 04:15:24.787784    4174 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4493,"bootTime":1721642431,"procs":452,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0722 04:15:24.787860    4174 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0722 04:15:24.792798    4174 out.go:177] * [offline-docker-634000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0722 04:15:24.801482    4174 notify.go:220] Checking for updates...
	I0722 04:15:24.805580    4174 out.go:177]   - MINIKUBE_LOCATION=19313
	I0722 04:15:24.809735    4174 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19313-1127/kubeconfig
	I0722 04:15:24.815569    4174 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0722 04:15:24.823544    4174 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0722 04:15:24.831595    4174 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19313-1127/.minikube
	I0722 04:15:24.839570    4174 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0722 04:15:24.844031    4174 config.go:182] Loaded profile config "multinode-941000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0722 04:15:24.844092    4174 driver.go:392] Setting default libvirt URI to qemu:///system
	I0722 04:15:24.846616    4174 out.go:177] * Using the qemu2 driver based on user configuration
	I0722 04:15:24.853595    4174 start.go:297] selected driver: qemu2
	I0722 04:15:24.853603    4174 start.go:901] validating driver "qemu2" against <nil>
	I0722 04:15:24.853609    4174 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0722 04:15:24.855527    4174 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0722 04:15:24.859510    4174 out.go:177] * Automatically selected the socket_vmnet network
	I0722 04:15:24.863661    4174 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0722 04:15:24.863682    4174 cni.go:84] Creating CNI manager for ""
	I0722 04:15:24.863689    4174 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0722 04:15:24.863694    4174 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0722 04:15:24.863744    4174 start.go:340] cluster config:
	{Name:offline-docker-634000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:offline-docker-634000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bi
n/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 04:15:24.867400    4174 iso.go:125] acquiring lock: {Name:mkd71eaf3e91c1dd737b75fca5ca69ff9bdad18f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 04:15:24.877547    4174 out.go:177] * Starting "offline-docker-634000" primary control-plane node in "offline-docker-634000" cluster
	I0722 04:15:24.881655    4174 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0722 04:15:24.881690    4174 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0722 04:15:24.881700    4174 cache.go:56] Caching tarball of preloaded images
	I0722 04:15:24.881776    4174 preload.go:172] Found /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0722 04:15:24.881781    4174 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0722 04:15:24.881843    4174 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/offline-docker-634000/config.json ...
	I0722 04:15:24.881858    4174 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/offline-docker-634000/config.json: {Name:mk79db4df438fb8048702edac1645e1e91e19e9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 04:15:24.882039    4174 start.go:360] acquireMachinesLock for offline-docker-634000: {Name:mkd413881e612ea8d9ddb0175c22cca270cd2452 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0722 04:15:24.882071    4174 start.go:364] duration metric: took 24.875µs to acquireMachinesLock for "offline-docker-634000"
	I0722 04:15:24.882081    4174 start.go:93] Provisioning new machine with config: &{Name:offline-docker-634000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.30.3 ClusterName:offline-docker-634000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mou
ntOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0722 04:15:24.882127    4174 start.go:125] createHost starting for "" (driver="qemu2")
	I0722 04:15:24.886597    4174 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0722 04:15:24.902337    4174 start.go:159] libmachine.API.Create for "offline-docker-634000" (driver="qemu2")
	I0722 04:15:24.902369    4174 client.go:168] LocalClient.Create starting
	I0722 04:15:24.902442    4174 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19313-1127/.minikube/certs/ca.pem
	I0722 04:15:24.902473    4174 main.go:141] libmachine: Decoding PEM data...
	I0722 04:15:24.902485    4174 main.go:141] libmachine: Parsing certificate...
	I0722 04:15:24.902529    4174 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19313-1127/.minikube/certs/cert.pem
	I0722 04:15:24.902553    4174 main.go:141] libmachine: Decoding PEM data...
	I0722 04:15:24.902560    4174 main.go:141] libmachine: Parsing certificate...
	I0722 04:15:24.902900    4174 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19313-1127/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0722 04:15:25.037297    4174 main.go:141] libmachine: Creating SSH key...
	I0722 04:15:25.071745    4174 main.go:141] libmachine: Creating Disk image...
	I0722 04:15:25.071752    4174 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0722 04:15:25.072022    4174 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/offline-docker-634000/disk.qcow2.raw /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/offline-docker-634000/disk.qcow2
	I0722 04:15:25.087793    4174 main.go:141] libmachine: STDOUT: 
	I0722 04:15:25.087821    4174 main.go:141] libmachine: STDERR: 
	I0722 04:15:25.087878    4174 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/offline-docker-634000/disk.qcow2 +20000M
	I0722 04:15:25.096254    4174 main.go:141] libmachine: STDOUT: Image resized.
	
	I0722 04:15:25.096274    4174 main.go:141] libmachine: STDERR: 
	I0722 04:15:25.096296    4174 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/offline-docker-634000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/offline-docker-634000/disk.qcow2
	I0722 04:15:25.096302    4174 main.go:141] libmachine: Starting QEMU VM...
	I0722 04:15:25.096316    4174 qemu.go:418] Using hvf for hardware acceleration
	I0722 04:15:25.096340    4174 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/offline-docker-634000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19313-1127/.minikube/machines/offline-docker-634000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/offline-docker-634000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:85:e7:01:99:46 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/offline-docker-634000/disk.qcow2
	I0722 04:15:25.098180    4174 main.go:141] libmachine: STDOUT: 
	I0722 04:15:25.098199    4174 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0722 04:15:25.098220    4174 client.go:171] duration metric: took 195.848834ms to LocalClient.Create
	I0722 04:15:27.100308    4174 start.go:128] duration metric: took 2.218192625s to createHost
	I0722 04:15:27.100341    4174 start.go:83] releasing machines lock for "offline-docker-634000", held for 2.218292208s
	W0722 04:15:27.100405    4174 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0722 04:15:27.111630    4174 out.go:177] * Deleting "offline-docker-634000" in qemu2 ...
	W0722 04:15:27.121667    4174 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0722 04:15:27.121744    4174 start.go:729] Will try again in 5 seconds ...
	I0722 04:15:32.123785    4174 start.go:360] acquireMachinesLock for offline-docker-634000: {Name:mkd413881e612ea8d9ddb0175c22cca270cd2452 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0722 04:15:32.123895    4174 start.go:364] duration metric: took 86.958µs to acquireMachinesLock for "offline-docker-634000"
	I0722 04:15:32.123921    4174 start.go:93] Provisioning new machine with config: &{Name:offline-docker-634000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.30.3 ClusterName:offline-docker-634000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mou
ntOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0722 04:15:32.123981    4174 start.go:125] createHost starting for "" (driver="qemu2")
	I0722 04:15:32.132684    4174 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0722 04:15:32.153391    4174 start.go:159] libmachine.API.Create for "offline-docker-634000" (driver="qemu2")
	I0722 04:15:32.153418    4174 client.go:168] LocalClient.Create starting
	I0722 04:15:32.153484    4174 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19313-1127/.minikube/certs/ca.pem
	I0722 04:15:32.153519    4174 main.go:141] libmachine: Decoding PEM data...
	I0722 04:15:32.153529    4174 main.go:141] libmachine: Parsing certificate...
	I0722 04:15:32.153570    4174 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19313-1127/.minikube/certs/cert.pem
	I0722 04:15:32.153595    4174 main.go:141] libmachine: Decoding PEM data...
	I0722 04:15:32.153601    4174 main.go:141] libmachine: Parsing certificate...
	I0722 04:15:32.153869    4174 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19313-1127/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0722 04:15:32.285184    4174 main.go:141] libmachine: Creating SSH key...
	I0722 04:15:32.338909    4174 main.go:141] libmachine: Creating Disk image...
	I0722 04:15:32.338914    4174 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0722 04:15:32.339079    4174 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/offline-docker-634000/disk.qcow2.raw /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/offline-docker-634000/disk.qcow2
	I0722 04:15:32.348060    4174 main.go:141] libmachine: STDOUT: 
	I0722 04:15:32.348078    4174 main.go:141] libmachine: STDERR: 
	I0722 04:15:32.348134    4174 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/offline-docker-634000/disk.qcow2 +20000M
	I0722 04:15:32.356031    4174 main.go:141] libmachine: STDOUT: Image resized.
	
	I0722 04:15:32.356044    4174 main.go:141] libmachine: STDERR: 
	I0722 04:15:32.356057    4174 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/offline-docker-634000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/offline-docker-634000/disk.qcow2
	I0722 04:15:32.356061    4174 main.go:141] libmachine: Starting QEMU VM...
	I0722 04:15:32.356073    4174 qemu.go:418] Using hvf for hardware acceleration
	I0722 04:15:32.356103    4174 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/offline-docker-634000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19313-1127/.minikube/machines/offline-docker-634000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/offline-docker-634000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:7a:f2:10:4d:9d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/offline-docker-634000/disk.qcow2
	I0722 04:15:32.357547    4174 main.go:141] libmachine: STDOUT: 
	I0722 04:15:32.357563    4174 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0722 04:15:32.357574    4174 client.go:171] duration metric: took 204.155333ms to LocalClient.Create
	I0722 04:15:34.359776    4174 start.go:128] duration metric: took 2.235787042s to createHost
	I0722 04:15:34.359858    4174 start.go:83] releasing machines lock for "offline-docker-634000", held for 2.23597725s
	W0722 04:15:34.360235    4174 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-634000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p offline-docker-634000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0722 04:15:34.385876    4174 out.go:177] 
	W0722 04:15:34.388943    4174 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0722 04:15:34.388983    4174 out.go:239] * 
	* 
	W0722 04:15:34.390749    4174 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0722 04:15:34.400741    4174 out.go:177] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-634000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:626: *** TestOffline FAILED at 2024-07-22 04:15:34.412042 -0700 PDT m=+2842.687046626
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-634000 -n offline-docker-634000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-634000 -n offline-docker-634000: exit status 7 (67.205416ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-634000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-634000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-634000
--- FAIL: TestOffline (9.81s)
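Every qemu2-driver start in this run dies the same way: socket_vmnet_client cannot reach /var/run/socket_vmnet, which suggests the socket_vmnet daemon is not running on the build agent. A minimal sketch (socket path taken from the log above; not part of the test suite) that checks whether anything is accepting connections on that socket:

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Same Unix socket path the qemu2 driver hands to socket_vmnet_client.
	const sock = "/var/run/socket_vmnet"
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// "connection refused" (or "no such file or directory") reproduces the
		// failure seen in the test output: the socket_vmnet daemon is down or
		// the socket was never created.
		fmt.Println("cannot connect:", err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections on", sock)
}
```

If the dial fails, restoring the socket_vmnet service on the agent would likely clear this whole family of GUEST_PROVISION failures (TestCertOptions, TestCertExpiration, and the other qemu2 starts below fail with the identical error).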

TestCertOptions (10.02s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-139000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-139000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (9.745213875s)

-- stdout --
	* [cert-options-139000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19313
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19313-1127/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19313-1127/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-options-139000" primary control-plane node in "cert-options-139000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-139000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-139000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-139000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-139000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-139000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 83 (85.174875ms)

-- stdout --
	* The control-plane node cert-options-139000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-139000"

-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-139000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 83
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-139000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 
'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-139000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-139000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 83 (46.621833ms)

-- stdout --
	* The control-plane node cert-options-139000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-139000"

-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-139000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 83
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contains the right api port. 
-- stdout --
	* The control-plane node cert-options-139000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-139000"

                                                
                                                
-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-07-22 04:16:05.814822 -0700 PDT m=+2874.090217876
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-139000 -n cert-options-139000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-139000 -n cert-options-139000: exit status 7 (28.360875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-139000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-139000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-139000
--- FAIL: TestCertOptions (10.02s)
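Editor's note: every failure in this block (and in the TestCertExpiration, TestDockerFlags and TestForceSystemdFlag blocks below) has the same root cause: the qemu2 driver could not reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"). A minimal check on the CI host might look like the sketch below; the install prefix /opt/socket_vmnet, the gateway address and the foreground invocation are assumptions based on a default socket_vmnet make-install and may not match this agent's setup.

	# Is the daemon running, and does the socket exist?
	pgrep -fl socket_vmnet
	ls -l /var/run/socket_vmnet
	# If not, a foreground start for a quick test (assumed default install paths/flags):
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet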

                                                
                                    
TestCertExpiration (195.33s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-966000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-966000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (9.979605167s)

                                                
                                                
-- stdout --
	* [cert-expiration-966000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19313
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19313-1127/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19313-1127/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-expiration-966000" primary control-plane node in "cert-expiration-966000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-966000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-966000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-966000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-966000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-966000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (5.189674208s)

                                                
                                                
-- stdout --
	* [cert-expiration-966000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19313
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19313-1127/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19313-1127/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-966000" primary control-plane node in "cert-expiration-966000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-966000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-966000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-966000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-966000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-966000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19313
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19313-1127/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19313-1127/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-966000" primary control-plane node in "cert-expiration-966000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-966000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-966000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-966000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-07-22 04:19:05.946887 -0700 PDT m=+3054.225697834
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-966000 -n cert-expiration-966000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-966000 -n cert-expiration-966000: exit status 7 (69.415625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-966000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-966000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-966000
--- FAIL: TestCertExpiration (195.33s)
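Editor's note: this test starts the profile with --cert-expiration=3m, waits for the certificates to lapse, then restarts with --cert-expiration=8760h and expects a warning about expired certs; none of that was exercised because the VM never started. On a profile that does come up, the validity window could be checked by hand with something like the sketch below (the certificate path is the one the cert tests read; the profile name is reused from this run).

	out/minikube-darwin-arm64 -p cert-expiration-966000 ssh \
	  "sudo openssl x509 -noout -dates -in /var/lib/minikube/certs/apiserver.crt"

If renewal worked, the notAfter value printed after the second start would be expected to sit roughly a year out, versus three minutes after the first start.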

                                                
                                    
TestDockerFlags (10.14s)

                                                
                                                
=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

                                                
                                                

                                                
                                                
=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-973000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-973000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.918589625s)

                                                
                                                
-- stdout --
	* [docker-flags-973000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19313
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19313-1127/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19313-1127/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "docker-flags-973000" primary control-plane node in "docker-flags-973000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-973000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0722 04:15:45.790823    4373 out.go:291] Setting OutFile to fd 1 ...
	I0722 04:15:45.790958    4373 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:15:45.790962    4373 out.go:304] Setting ErrFile to fd 2...
	I0722 04:15:45.790964    4373 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:15:45.791127    4373 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19313-1127/.minikube/bin
	I0722 04:15:45.792218    4373 out.go:298] Setting JSON to false
	I0722 04:15:45.808072    4373 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4514,"bootTime":1721642431,"procs":454,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0722 04:15:45.808150    4373 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0722 04:15:45.815007    4373 out.go:177] * [docker-flags-973000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0722 04:15:45.822981    4373 notify.go:220] Checking for updates...
	I0722 04:15:45.827790    4373 out.go:177]   - MINIKUBE_LOCATION=19313
	I0722 04:15:45.833963    4373 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19313-1127/kubeconfig
	I0722 04:15:45.836858    4373 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0722 04:15:45.844947    4373 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0722 04:15:45.852896    4373 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19313-1127/.minikube
	I0722 04:15:45.860888    4373 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0722 04:15:45.865234    4373 config.go:182] Loaded profile config "force-systemd-flag-708000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0722 04:15:45.865310    4373 config.go:182] Loaded profile config "multinode-941000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0722 04:15:45.865360    4373 driver.go:392] Setting default libvirt URI to qemu:///system
	I0722 04:15:45.868954    4373 out.go:177] * Using the qemu2 driver based on user configuration
	I0722 04:15:45.875946    4373 start.go:297] selected driver: qemu2
	I0722 04:15:45.875952    4373 start.go:901] validating driver "qemu2" against <nil>
	I0722 04:15:45.875959    4373 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0722 04:15:45.878501    4373 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0722 04:15:45.882980    4373 out.go:177] * Automatically selected the socket_vmnet network
	I0722 04:15:45.887075    4373 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0722 04:15:45.887123    4373 cni.go:84] Creating CNI manager for ""
	I0722 04:15:45.887131    4373 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0722 04:15:45.887136    4373 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0722 04:15:45.887178    4373 start.go:340] cluster config:
	{Name:docker-flags-973000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:docker-flags-973000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[]
DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMn
etClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 04:15:45.891365    4373 iso.go:125] acquiring lock: {Name:mkd71eaf3e91c1dd737b75fca5ca69ff9bdad18f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 04:15:45.897859    4373 out.go:177] * Starting "docker-flags-973000" primary control-plane node in "docker-flags-973000" cluster
	I0722 04:15:45.900954    4373 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0722 04:15:45.900987    4373 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0722 04:15:45.901003    4373 cache.go:56] Caching tarball of preloaded images
	I0722 04:15:45.901090    4373 preload.go:172] Found /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0722 04:15:45.901097    4373 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0722 04:15:45.901173    4373 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/docker-flags-973000/config.json ...
	I0722 04:15:45.901199    4373 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/docker-flags-973000/config.json: {Name:mk4a6268f37a4f688e45fc4ccd20e920b6166c1e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 04:15:45.901433    4373 start.go:360] acquireMachinesLock for docker-flags-973000: {Name:mkd413881e612ea8d9ddb0175c22cca270cd2452 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0722 04:15:45.901475    4373 start.go:364] duration metric: took 33.083µs to acquireMachinesLock for "docker-flags-973000"
	I0722 04:15:45.901488    4373 start.go:93] Provisioning new machine with config: &{Name:docker-flags-973000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey
: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:docker-flags-973000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:dock
er MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0722 04:15:45.901535    4373 start.go:125] createHost starting for "" (driver="qemu2")
	I0722 04:15:45.909920    4373 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0722 04:15:45.929843    4373 start.go:159] libmachine.API.Create for "docker-flags-973000" (driver="qemu2")
	I0722 04:15:45.929881    4373 client.go:168] LocalClient.Create starting
	I0722 04:15:45.929958    4373 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19313-1127/.minikube/certs/ca.pem
	I0722 04:15:45.929999    4373 main.go:141] libmachine: Decoding PEM data...
	I0722 04:15:45.930010    4373 main.go:141] libmachine: Parsing certificate...
	I0722 04:15:45.930058    4373 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19313-1127/.minikube/certs/cert.pem
	I0722 04:15:45.930085    4373 main.go:141] libmachine: Decoding PEM data...
	I0722 04:15:45.930093    4373 main.go:141] libmachine: Parsing certificate...
	I0722 04:15:45.930555    4373 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19313-1127/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0722 04:15:46.063882    4373 main.go:141] libmachine: Creating SSH key...
	I0722 04:15:46.148249    4373 main.go:141] libmachine: Creating Disk image...
	I0722 04:15:46.148254    4373 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0722 04:15:46.148450    4373 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/docker-flags-973000/disk.qcow2.raw /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/docker-flags-973000/disk.qcow2
	I0722 04:15:46.157464    4373 main.go:141] libmachine: STDOUT: 
	I0722 04:15:46.157520    4373 main.go:141] libmachine: STDERR: 
	I0722 04:15:46.157575    4373 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/docker-flags-973000/disk.qcow2 +20000M
	I0722 04:15:46.165322    4373 main.go:141] libmachine: STDOUT: Image resized.
	
	I0722 04:15:46.165337    4373 main.go:141] libmachine: STDERR: 
	I0722 04:15:46.165355    4373 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/docker-flags-973000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/docker-flags-973000/disk.qcow2
	I0722 04:15:46.165360    4373 main.go:141] libmachine: Starting QEMU VM...
	I0722 04:15:46.165377    4373 qemu.go:418] Using hvf for hardware acceleration
	I0722 04:15:46.165401    4373 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/docker-flags-973000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19313-1127/.minikube/machines/docker-flags-973000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/docker-flags-973000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:21:2b:1e:8d:bc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/docker-flags-973000/disk.qcow2
	I0722 04:15:46.166895    4373 main.go:141] libmachine: STDOUT: 
	I0722 04:15:46.166910    4373 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0722 04:15:46.166927    4373 client.go:171] duration metric: took 237.044ms to LocalClient.Create
	I0722 04:15:48.169084    4373 start.go:128] duration metric: took 2.267555375s to createHost
	I0722 04:15:48.169132    4373 start.go:83] releasing machines lock for "docker-flags-973000", held for 2.267672958s
	W0722 04:15:48.169191    4373 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0722 04:15:48.190369    4373 out.go:177] * Deleting "docker-flags-973000" in qemu2 ...
	W0722 04:15:48.212175    4373 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0722 04:15:48.212210    4373 start.go:729] Will try again in 5 seconds ...
	I0722 04:15:53.214411    4373 start.go:360] acquireMachinesLock for docker-flags-973000: {Name:mkd413881e612ea8d9ddb0175c22cca270cd2452 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0722 04:15:53.238137    4373 start.go:364] duration metric: took 23.500959ms to acquireMachinesLock for "docker-flags-973000"
	I0722 04:15:53.238239    4373 start.go:93] Provisioning new machine with config: &{Name:docker-flags-973000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey
: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:docker-flags-973000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:dock
er MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0722 04:15:53.238535    4373 start.go:125] createHost starting for "" (driver="qemu2")
	I0722 04:15:53.259119    4373 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0722 04:15:53.307556    4373 start.go:159] libmachine.API.Create for "docker-flags-973000" (driver="qemu2")
	I0722 04:15:53.307602    4373 client.go:168] LocalClient.Create starting
	I0722 04:15:53.307751    4373 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19313-1127/.minikube/certs/ca.pem
	I0722 04:15:53.307829    4373 main.go:141] libmachine: Decoding PEM data...
	I0722 04:15:53.307844    4373 main.go:141] libmachine: Parsing certificate...
	I0722 04:15:53.307908    4373 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19313-1127/.minikube/certs/cert.pem
	I0722 04:15:53.307964    4373 main.go:141] libmachine: Decoding PEM data...
	I0722 04:15:53.307975    4373 main.go:141] libmachine: Parsing certificate...
	I0722 04:15:53.308557    4373 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19313-1127/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0722 04:15:53.457837    4373 main.go:141] libmachine: Creating SSH key...
	I0722 04:15:53.615734    4373 main.go:141] libmachine: Creating Disk image...
	I0722 04:15:53.615741    4373 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0722 04:15:53.615974    4373 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/docker-flags-973000/disk.qcow2.raw /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/docker-flags-973000/disk.qcow2
	I0722 04:15:53.625175    4373 main.go:141] libmachine: STDOUT: 
	I0722 04:15:53.625199    4373 main.go:141] libmachine: STDERR: 
	I0722 04:15:53.625254    4373 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/docker-flags-973000/disk.qcow2 +20000M
	I0722 04:15:53.633140    4373 main.go:141] libmachine: STDOUT: Image resized.
	
	I0722 04:15:53.633154    4373 main.go:141] libmachine: STDERR: 
	I0722 04:15:53.633165    4373 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/docker-flags-973000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/docker-flags-973000/disk.qcow2
	I0722 04:15:53.633170    4373 main.go:141] libmachine: Starting QEMU VM...
	I0722 04:15:53.633181    4373 qemu.go:418] Using hvf for hardware acceleration
	I0722 04:15:53.633204    4373 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/docker-flags-973000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19313-1127/.minikube/machines/docker-flags-973000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/docker-flags-973000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:de:d3:e3:32:f7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/docker-flags-973000/disk.qcow2
	I0722 04:15:53.634652    4373 main.go:141] libmachine: STDOUT: 
	I0722 04:15:53.634667    4373 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0722 04:15:53.634679    4373 client.go:171] duration metric: took 327.075042ms to LocalClient.Create
	I0722 04:15:55.636836    4373 start.go:128] duration metric: took 2.398304083s to createHost
	I0722 04:15:55.636886    4373 start.go:83] releasing machines lock for "docker-flags-973000", held for 2.398726875s
	W0722 04:15:55.637244    4373 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-973000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p docker-flags-973000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0722 04:15:55.649918    4373 out.go:177] 
	W0722 04:15:55.655048    4373 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0722 04:15:55.655093    4373 out.go:239] * 
	* 
	W0722 04:15:55.657554    4373 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0722 04:15:55.668878    4373 out.go:177] 

                                                
                                                
** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-973000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-973000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-973000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 83 (72.667458ms)

                                                
                                                
-- stdout --
	* The control-plane node docker-flags-973000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-973000"

                                                
                                                
-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-973000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 83
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-973000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-973000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-973000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-973000\"\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-973000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-973000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 83 (46.446458ms)

                                                
                                                
-- stdout --
	* The control-plane node docker-flags-973000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-973000"

                                                
                                                
-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-973000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 83
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-973000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to have include *--debug* . output: "* The control-plane node docker-flags-973000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-973000\"\n"
panic.go:626: *** TestDockerFlags FAILED at 2024-07-22 04:15:55.804001 -0700 PDT m=+2864.079272501
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-973000 -n docker-flags-973000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-973000 -n docker-flags-973000: exit status 7 (28.1575ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-973000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-973000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-973000
--- FAIL: TestDockerFlags (10.14s)
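Editor's note: with the host down, the two systemctl probes above only returned the "host is not running" hint, so the --docker-env / --docker-opt assertions never saw real data. On a healthy node the same probes (commands verbatim from docker_test.go above) would be expected to surface the flags roughly as sketched; the Environment/ExecStart lines in the comments are illustrative, not captured output.

	out/minikube-darwin-arm64 -p docker-flags-973000 ssh \
	  "sudo systemctl show docker --property=Environment --no-pager"
	#   Environment=FOO=BAR BAZ=BAT ...
	out/minikube-darwin-arm64 -p docker-flags-973000 ssh \
	  "sudo systemctl show docker --property=ExecStart --no-pager"
	#   ExecStart={ ... dockerd ... --debug --icc=true ... }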

                                                
                                    
TestForceSystemdFlag (9.88s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-708000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-708000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.690222667s)

                                                
                                                
-- stdout --
	* [force-systemd-flag-708000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19313
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19313-1127/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19313-1127/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-flag-708000" primary control-plane node in "force-systemd-flag-708000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-708000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0722 04:15:40.927844    4348 out.go:291] Setting OutFile to fd 1 ...
	I0722 04:15:40.927967    4348 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:15:40.927970    4348 out.go:304] Setting ErrFile to fd 2...
	I0722 04:15:40.927973    4348 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:15:40.928090    4348 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19313-1127/.minikube/bin
	I0722 04:15:40.929136    4348 out.go:298] Setting JSON to false
	I0722 04:15:40.944863    4348 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4509,"bootTime":1721642431,"procs":452,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0722 04:15:40.944931    4348 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0722 04:15:40.950125    4348 out.go:177] * [force-systemd-flag-708000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0722 04:15:40.958079    4348 out.go:177]   - MINIKUBE_LOCATION=19313
	I0722 04:15:40.958118    4348 notify.go:220] Checking for updates...
	I0722 04:15:40.966054    4348 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19313-1127/kubeconfig
	I0722 04:15:40.970039    4348 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0722 04:15:40.973081    4348 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0722 04:15:40.976067    4348 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19313-1127/.minikube
	I0722 04:15:40.979092    4348 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0722 04:15:40.982363    4348 config.go:182] Loaded profile config "force-systemd-env-139000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0722 04:15:40.982434    4348 config.go:182] Loaded profile config "multinode-941000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0722 04:15:40.982492    4348 driver.go:392] Setting default libvirt URI to qemu:///system
	I0722 04:15:40.987046    4348 out.go:177] * Using the qemu2 driver based on user configuration
	I0722 04:15:40.994041    4348 start.go:297] selected driver: qemu2
	I0722 04:15:40.994048    4348 start.go:901] validating driver "qemu2" against <nil>
	I0722 04:15:40.994053    4348 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0722 04:15:40.996356    4348 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0722 04:15:40.999046    4348 out.go:177] * Automatically selected the socket_vmnet network
	I0722 04:15:41.002049    4348 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0722 04:15:41.002061    4348 cni.go:84] Creating CNI manager for ""
	I0722 04:15:41.002069    4348 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0722 04:15:41.002074    4348 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0722 04:15:41.002100    4348 start.go:340] cluster config:
	{Name:force-systemd-flag-708000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-flag-708000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster
.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet Static
IP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 04:15:41.005755    4348 iso.go:125] acquiring lock: {Name:mkd71eaf3e91c1dd737b75fca5ca69ff9bdad18f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 04:15:41.012902    4348 out.go:177] * Starting "force-systemd-flag-708000" primary control-plane node in "force-systemd-flag-708000" cluster
	I0722 04:15:41.017036    4348 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0722 04:15:41.017049    4348 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0722 04:15:41.017059    4348 cache.go:56] Caching tarball of preloaded images
	I0722 04:15:41.017112    4348 preload.go:172] Found /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0722 04:15:41.017117    4348 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0722 04:15:41.017180    4348 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/force-systemd-flag-708000/config.json ...
	I0722 04:15:41.017193    4348 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/force-systemd-flag-708000/config.json: {Name:mk3a6acafe601f2454818e59aa4bc464bc6b91ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 04:15:41.017507    4348 start.go:360] acquireMachinesLock for force-systemd-flag-708000: {Name:mkd413881e612ea8d9ddb0175c22cca270cd2452 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0722 04:15:41.017544    4348 start.go:364] duration metric: took 27.334µs to acquireMachinesLock for "force-systemd-flag-708000"
	I0722 04:15:41.017554    4348 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-708000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-flag-708000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror
: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0722 04:15:41.017583    4348 start.go:125] createHost starting for "" (driver="qemu2")
	I0722 04:15:41.020931    4348 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0722 04:15:41.037774    4348 start.go:159] libmachine.API.Create for "force-systemd-flag-708000" (driver="qemu2")
	I0722 04:15:41.037806    4348 client.go:168] LocalClient.Create starting
	I0722 04:15:41.037876    4348 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19313-1127/.minikube/certs/ca.pem
	I0722 04:15:41.037911    4348 main.go:141] libmachine: Decoding PEM data...
	I0722 04:15:41.037920    4348 main.go:141] libmachine: Parsing certificate...
	I0722 04:15:41.037961    4348 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19313-1127/.minikube/certs/cert.pem
	I0722 04:15:41.037985    4348 main.go:141] libmachine: Decoding PEM data...
	I0722 04:15:41.037995    4348 main.go:141] libmachine: Parsing certificate...
	I0722 04:15:41.038369    4348 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19313-1127/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0722 04:15:41.166105    4348 main.go:141] libmachine: Creating SSH key...
	I0722 04:15:41.217360    4348 main.go:141] libmachine: Creating Disk image...
	I0722 04:15:41.217367    4348 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0722 04:15:41.217541    4348 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/force-systemd-flag-708000/disk.qcow2.raw /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/force-systemd-flag-708000/disk.qcow2
	I0722 04:15:41.226365    4348 main.go:141] libmachine: STDOUT: 
	I0722 04:15:41.226385    4348 main.go:141] libmachine: STDERR: 
	I0722 04:15:41.226432    4348 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/force-systemd-flag-708000/disk.qcow2 +20000M
	I0722 04:15:41.234254    4348 main.go:141] libmachine: STDOUT: Image resized.
	
	I0722 04:15:41.234268    4348 main.go:141] libmachine: STDERR: 
	I0722 04:15:41.234285    4348 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/force-systemd-flag-708000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/force-systemd-flag-708000/disk.qcow2
	I0722 04:15:41.234290    4348 main.go:141] libmachine: Starting QEMU VM...
	I0722 04:15:41.234299    4348 qemu.go:418] Using hvf for hardware acceleration
	I0722 04:15:41.234327    4348 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/force-systemd-flag-708000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19313-1127/.minikube/machines/force-systemd-flag-708000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/force-systemd-flag-708000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:87:3c:8c:fb:56 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/force-systemd-flag-708000/disk.qcow2
	I0722 04:15:41.235863    4348 main.go:141] libmachine: STDOUT: 
	I0722 04:15:41.235881    4348 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0722 04:15:41.235903    4348 client.go:171] duration metric: took 198.094167ms to LocalClient.Create
	I0722 04:15:43.238055    4348 start.go:128] duration metric: took 2.220479125s to createHost
	I0722 04:15:43.238129    4348 start.go:83] releasing machines lock for "force-systemd-flag-708000", held for 2.220603709s
	W0722 04:15:43.238193    4348 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0722 04:15:43.275309    4348 out.go:177] * Deleting "force-systemd-flag-708000" in qemu2 ...
	W0722 04:15:43.293054    4348 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0722 04:15:43.293075    4348 start.go:729] Will try again in 5 seconds ...
	I0722 04:15:48.295336    4348 start.go:360] acquireMachinesLock for force-systemd-flag-708000: {Name:mkd413881e612ea8d9ddb0175c22cca270cd2452 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0722 04:15:48.295767    4348 start.go:364] duration metric: took 317µs to acquireMachinesLock for "force-systemd-flag-708000"
	I0722 04:15:48.295870    4348 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-708000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-flag-708000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror
: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0722 04:15:48.296138    4348 start.go:125] createHost starting for "" (driver="qemu2")
	I0722 04:15:48.305500    4348 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0722 04:15:48.355850    4348 start.go:159] libmachine.API.Create for "force-systemd-flag-708000" (driver="qemu2")
	I0722 04:15:48.355900    4348 client.go:168] LocalClient.Create starting
	I0722 04:15:48.356028    4348 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19313-1127/.minikube/certs/ca.pem
	I0722 04:15:48.356091    4348 main.go:141] libmachine: Decoding PEM data...
	I0722 04:15:48.356106    4348 main.go:141] libmachine: Parsing certificate...
	I0722 04:15:48.356164    4348 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19313-1127/.minikube/certs/cert.pem
	I0722 04:15:48.356207    4348 main.go:141] libmachine: Decoding PEM data...
	I0722 04:15:48.356221    4348 main.go:141] libmachine: Parsing certificate...
	I0722 04:15:48.357177    4348 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19313-1127/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0722 04:15:48.498535    4348 main.go:141] libmachine: Creating SSH key...
	I0722 04:15:48.528720    4348 main.go:141] libmachine: Creating Disk image...
	I0722 04:15:48.528725    4348 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0722 04:15:48.528895    4348 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/force-systemd-flag-708000/disk.qcow2.raw /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/force-systemd-flag-708000/disk.qcow2
	I0722 04:15:48.537858    4348 main.go:141] libmachine: STDOUT: 
	I0722 04:15:48.537878    4348 main.go:141] libmachine: STDERR: 
	I0722 04:15:48.537928    4348 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/force-systemd-flag-708000/disk.qcow2 +20000M
	I0722 04:15:48.545675    4348 main.go:141] libmachine: STDOUT: Image resized.
	
	I0722 04:15:48.545691    4348 main.go:141] libmachine: STDERR: 
	I0722 04:15:48.545704    4348 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/force-systemd-flag-708000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/force-systemd-flag-708000/disk.qcow2
	I0722 04:15:48.545716    4348 main.go:141] libmachine: Starting QEMU VM...
	I0722 04:15:48.545729    4348 qemu.go:418] Using hvf for hardware acceleration
	I0722 04:15:48.545757    4348 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/force-systemd-flag-708000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19313-1127/.minikube/machines/force-systemd-flag-708000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/force-systemd-flag-708000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:c2:07:e9:19:b1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/force-systemd-flag-708000/disk.qcow2
	I0722 04:15:48.547300    4348 main.go:141] libmachine: STDOUT: 
	I0722 04:15:48.547313    4348 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0722 04:15:48.547324    4348 client.go:171] duration metric: took 191.418791ms to LocalClient.Create
	I0722 04:15:50.549501    4348 start.go:128] duration metric: took 2.25334875s to createHost
	I0722 04:15:50.549637    4348 start.go:83] releasing machines lock for "force-systemd-flag-708000", held for 2.253827834s
	W0722 04:15:50.550021    4348 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-708000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-708000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0722 04:15:50.562607    4348 out.go:177] 
	W0722 04:15:50.566695    4348 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0722 04:15:50.566742    4348 out.go:239] * 
	* 
	W0722 04:15:50.569220    4348 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0722 04:15:50.578595    4348 out.go:177] 

** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-708000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-708000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-708000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (74.124833ms)

-- stdout --
	* The control-plane node force-systemd-flag-708000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-flag-708000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-708000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-07-22 04:15:50.669251 -0700 PDT m=+2858.944458043
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-708000 -n force-systemd-flag-708000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-708000 -n force-systemd-flag-708000: exit status 7 (35.516542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-708000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-708000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-708000
--- FAIL: TestForceSystemdFlag (9.88s)

TestForceSystemdEnv (11.21s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-139000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-139000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (11.013715667s)

-- stdout --
	* [force-systemd-env-139000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19313
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19313-1127/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19313-1127/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-env-139000" primary control-plane node in "force-systemd-env-139000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-139000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0722 04:15:34.581953    4314 out.go:291] Setting OutFile to fd 1 ...
	I0722 04:15:34.582070    4314 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:15:34.582074    4314 out.go:304] Setting ErrFile to fd 2...
	I0722 04:15:34.582076    4314 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:15:34.582206    4314 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19313-1127/.minikube/bin
	I0722 04:15:34.583227    4314 out.go:298] Setting JSON to false
	I0722 04:15:34.598859    4314 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4503,"bootTime":1721642431,"procs":453,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0722 04:15:34.598934    4314 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0722 04:15:34.604291    4314 out.go:177] * [force-systemd-env-139000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0722 04:15:34.611379    4314 out.go:177]   - MINIKUBE_LOCATION=19313
	I0722 04:15:34.611434    4314 notify.go:220] Checking for updates...
	I0722 04:15:34.618368    4314 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19313-1127/kubeconfig
	I0722 04:15:34.621336    4314 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0722 04:15:34.624361    4314 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0722 04:15:34.627326    4314 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19313-1127/.minikube
	I0722 04:15:34.630383    4314 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0722 04:15:34.633650    4314 config.go:182] Loaded profile config "multinode-941000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0722 04:15:34.633706    4314 driver.go:392] Setting default libvirt URI to qemu:///system
	I0722 04:15:34.641428    4314 out.go:177] * Using the qemu2 driver based on user configuration
	I0722 04:15:34.648277    4314 start.go:297] selected driver: qemu2
	I0722 04:15:34.648285    4314 start.go:901] validating driver "qemu2" against <nil>
	I0722 04:15:34.648291    4314 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0722 04:15:34.650689    4314 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0722 04:15:34.654306    4314 out.go:177] * Automatically selected the socket_vmnet network
	I0722 04:15:34.657412    4314 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0722 04:15:34.657445    4314 cni.go:84] Creating CNI manager for ""
	I0722 04:15:34.657456    4314 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0722 04:15:34.657460    4314 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0722 04:15:34.657483    4314 start.go:340] cluster config:
	{Name:force-systemd-env-139000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-env-139000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.l
ocal ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP
: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 04:15:34.661250    4314 iso.go:125] acquiring lock: {Name:mkd71eaf3e91c1dd737b75fca5ca69ff9bdad18f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 04:15:34.668363    4314 out.go:177] * Starting "force-systemd-env-139000" primary control-plane node in "force-systemd-env-139000" cluster
	I0722 04:15:34.671270    4314 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0722 04:15:34.671296    4314 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0722 04:15:34.671307    4314 cache.go:56] Caching tarball of preloaded images
	I0722 04:15:34.671380    4314 preload.go:172] Found /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0722 04:15:34.671386    4314 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0722 04:15:34.671447    4314 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/force-systemd-env-139000/config.json ...
	I0722 04:15:34.671470    4314 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/force-systemd-env-139000/config.json: {Name:mk6bf17a8bdb4b89d59cd72c1b6e3d47e9c35ba8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 04:15:34.671699    4314 start.go:360] acquireMachinesLock for force-systemd-env-139000: {Name:mkd413881e612ea8d9ddb0175c22cca270cd2452 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0722 04:15:34.671737    4314 start.go:364] duration metric: took 29.875µs to acquireMachinesLock for "force-systemd-env-139000"
	I0722 04:15:34.671748    4314 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-139000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-env-139000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0722 04:15:34.671780    4314 start.go:125] createHost starting for "" (driver="qemu2")
	I0722 04:15:34.679293    4314 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0722 04:15:34.697450    4314 start.go:159] libmachine.API.Create for "force-systemd-env-139000" (driver="qemu2")
	I0722 04:15:34.697482    4314 client.go:168] LocalClient.Create starting
	I0722 04:15:34.697556    4314 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19313-1127/.minikube/certs/ca.pem
	I0722 04:15:34.697586    4314 main.go:141] libmachine: Decoding PEM data...
	I0722 04:15:34.697596    4314 main.go:141] libmachine: Parsing certificate...
	I0722 04:15:34.697632    4314 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19313-1127/.minikube/certs/cert.pem
	I0722 04:15:34.697657    4314 main.go:141] libmachine: Decoding PEM data...
	I0722 04:15:34.697667    4314 main.go:141] libmachine: Parsing certificate...
	I0722 04:15:34.698009    4314 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19313-1127/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0722 04:15:34.830447    4314 main.go:141] libmachine: Creating SSH key...
	I0722 04:15:34.906324    4314 main.go:141] libmachine: Creating Disk image...
	I0722 04:15:34.906331    4314 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0722 04:15:34.906519    4314 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/force-systemd-env-139000/disk.qcow2.raw /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/force-systemd-env-139000/disk.qcow2
	I0722 04:15:34.916250    4314 main.go:141] libmachine: STDOUT: 
	I0722 04:15:34.916281    4314 main.go:141] libmachine: STDERR: 
	I0722 04:15:34.916343    4314 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/force-systemd-env-139000/disk.qcow2 +20000M
	I0722 04:15:34.926707    4314 main.go:141] libmachine: STDOUT: Image resized.
	
	I0722 04:15:34.926730    4314 main.go:141] libmachine: STDERR: 
	I0722 04:15:34.926744    4314 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/force-systemd-env-139000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/force-systemd-env-139000/disk.qcow2
	I0722 04:15:34.926749    4314 main.go:141] libmachine: Starting QEMU VM...
	I0722 04:15:34.926771    4314 qemu.go:418] Using hvf for hardware acceleration
	I0722 04:15:34.926803    4314 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/force-systemd-env-139000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19313-1127/.minikube/machines/force-systemd-env-139000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/force-systemd-env-139000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:12:d3:5d:41:83 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/force-systemd-env-139000/disk.qcow2
	I0722 04:15:34.928398    4314 main.go:141] libmachine: STDOUT: 
	I0722 04:15:34.928422    4314 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0722 04:15:34.928439    4314 client.go:171] duration metric: took 230.955625ms to LocalClient.Create
	I0722 04:15:36.930505    4314 start.go:128] duration metric: took 2.258746583s to createHost
	I0722 04:15:36.930529    4314 start.go:83] releasing machines lock for "force-systemd-env-139000", held for 2.258815375s
	W0722 04:15:36.930540    4314 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0722 04:15:36.943068    4314 out.go:177] * Deleting "force-systemd-env-139000" in qemu2 ...
	W0722 04:15:36.952536    4314 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0722 04:15:36.952547    4314 start.go:729] Will try again in 5 seconds ...
	I0722 04:15:41.954657    4314 start.go:360] acquireMachinesLock for force-systemd-env-139000: {Name:mkd413881e612ea8d9ddb0175c22cca270cd2452 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0722 04:15:43.238296    4314 start.go:364] duration metric: took 1.283541708s to acquireMachinesLock for "force-systemd-env-139000"
	I0722 04:15:43.238430    4314 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-139000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-env-139000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0722 04:15:43.238685    4314 start.go:125] createHost starting for "" (driver="qemu2")
	I0722 04:15:43.264277    4314 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0722 04:15:43.312270    4314 start.go:159] libmachine.API.Create for "force-systemd-env-139000" (driver="qemu2")
	I0722 04:15:43.312327    4314 client.go:168] LocalClient.Create starting
	I0722 04:15:43.312459    4314 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19313-1127/.minikube/certs/ca.pem
	I0722 04:15:43.312523    4314 main.go:141] libmachine: Decoding PEM data...
	I0722 04:15:43.312539    4314 main.go:141] libmachine: Parsing certificate...
	I0722 04:15:43.312608    4314 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19313-1127/.minikube/certs/cert.pem
	I0722 04:15:43.312655    4314 main.go:141] libmachine: Decoding PEM data...
	I0722 04:15:43.312667    4314 main.go:141] libmachine: Parsing certificate...
	I0722 04:15:43.313220    4314 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19313-1127/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0722 04:15:43.459964    4314 main.go:141] libmachine: Creating SSH key...
	I0722 04:15:43.506817    4314 main.go:141] libmachine: Creating Disk image...
	I0722 04:15:43.506823    4314 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0722 04:15:43.507013    4314 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/force-systemd-env-139000/disk.qcow2.raw /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/force-systemd-env-139000/disk.qcow2
	I0722 04:15:43.515886    4314 main.go:141] libmachine: STDOUT: 
	I0722 04:15:43.515903    4314 main.go:141] libmachine: STDERR: 
	I0722 04:15:43.515979    4314 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/force-systemd-env-139000/disk.qcow2 +20000M
	I0722 04:15:43.523712    4314 main.go:141] libmachine: STDOUT: Image resized.
	
	I0722 04:15:43.523730    4314 main.go:141] libmachine: STDERR: 
	I0722 04:15:43.523742    4314 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/force-systemd-env-139000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/force-systemd-env-139000/disk.qcow2
	I0722 04:15:43.523747    4314 main.go:141] libmachine: Starting QEMU VM...
	I0722 04:15:43.523758    4314 qemu.go:418] Using hvf for hardware acceleration
	I0722 04:15:43.523797    4314 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/force-systemd-env-139000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19313-1127/.minikube/machines/force-systemd-env-139000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/force-systemd-env-139000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:8e:5a:b4:b2:73 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/force-systemd-env-139000/disk.qcow2
	I0722 04:15:43.525351    4314 main.go:141] libmachine: STDOUT: 
	I0722 04:15:43.525368    4314 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0722 04:15:43.525380    4314 client.go:171] duration metric: took 213.049542ms to LocalClient.Create
	I0722 04:15:45.527690    4314 start.go:128] duration metric: took 2.288973042s to createHost
	I0722 04:15:45.527789    4314 start.go:83] releasing machines lock for "force-systemd-env-139000", held for 2.289478459s
	W0722 04:15:45.528112    4314 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-139000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-139000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0722 04:15:45.538762    4314 out.go:177] 
	W0722 04:15:45.542771    4314 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0722 04:15:45.542805    4314 out.go:239] * 
	* 
	W0722 04:15:45.545640    4314 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0722 04:15:45.554768    4314 out.go:177] 

** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-139000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-139000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-139000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (80.360042ms)

-- stdout --
	* The control-plane node force-systemd-env-139000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-env-139000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-139000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-07-22 04:15:45.650162 -0700 PDT m=+2853.925306293
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-139000 -n force-systemd-env-139000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-139000 -n force-systemd-env-139000: exit status 7 (34.617583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-139000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-139000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-139000
--- FAIL: TestForceSystemdEnv (11.21s)
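Both TestForceSystemdFlag and TestForceSystemdEnv above fail at the same step: the qemu2 driver launches qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client, and that client cannot reach the daemon socket at /var/run/socket_vmnet ("Connection refused"), so no VM is ever created and the later ssh/status checks can only report a stopped host. The following is a minimal diagnostic sketch, not part of the test suite, assuming the socket path reported in the logs above; it simply checks whether anything is listening on that unix socket, which is the precondition socket_vmnet_client needs before QEMU can start.

// probe_socket_vmnet.go - minimal diagnostic sketch (assumption: the socket path
// below matches the one used by the failing qemu2 driver invocation above).
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	const socketPath = "/var/run/socket_vmnet" // taken from the log output above

	// socket_vmnet_client has to connect to this unix socket; the repeated
	// "Connection refused" in the test log means this dial would fail on the CI host.
	conn, err := net.DialTimeout("unix", socketPath, 2*time.Second)
	if err != nil {
		fmt.Fprintf(os.Stderr, "socket_vmnet does not appear to be listening: %v\n", err)
		os.Exit(1)
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is accepting connections at", socketPath)
}

If this probe fails on the build agent, the likely cause is that the socket_vmnet service is not running or listens at a different path; that is an assumption about the host environment rather than something the log itself establishes.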

TestFunctional/parallel/ServiceCmdConnect (27.08s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1623: (dbg) Run:  kubectl --context functional-753000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-753000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-6f49f58cd5-bgvlc" [4bc78060-362a-46b3-a035-ec5f1389e976] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-6f49f58cd5-bgvlc" [4bc78060-362a-46b3-a035-ec5f1389e976] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.0037535s
functional_test.go:1645: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.105.4:31537
functional_test.go:1657: error fetching http://192.168.105.4:31537: Get "http://192.168.105.4:31537": dial tcp 192.168.105.4:31537: connect: connection refused
functional_test.go:1657: error fetching http://192.168.105.4:31537: Get "http://192.168.105.4:31537": dial tcp 192.168.105.4:31537: connect: connection refused
functional_test.go:1657: error fetching http://192.168.105.4:31537: Get "http://192.168.105.4:31537": dial tcp 192.168.105.4:31537: connect: connection refused
functional_test.go:1657: error fetching http://192.168.105.4:31537: Get "http://192.168.105.4:31537": dial tcp 192.168.105.4:31537: connect: connection refused
functional_test.go:1657: error fetching http://192.168.105.4:31537: Get "http://192.168.105.4:31537": dial tcp 192.168.105.4:31537: connect: connection refused
functional_test.go:1657: error fetching http://192.168.105.4:31537: Get "http://192.168.105.4:31537": dial tcp 192.168.105.4:31537: connect: connection refused
functional_test.go:1657: error fetching http://192.168.105.4:31537: Get "http://192.168.105.4:31537": dial tcp 192.168.105.4:31537: connect: connection refused
functional_test.go:1677: failed to fetch http://192.168.105.4:31537: Get "http://192.168.105.4:31537": dial tcp 192.168.105.4:31537: connect: connection refused
functional_test.go:1594: service test failed - dumping debug information
functional_test.go:1595: -----------------------service failure post-mortem--------------------------------
functional_test.go:1598: (dbg) Run:  kubectl --context functional-753000 describe po hello-node-connect
functional_test.go:1602: hello-node pod describe:
Name:             hello-node-connect-6f49f58cd5-bgvlc
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-753000/192.168.105.4
Start Time:       Mon, 22 Jul 2024 03:40:02 -0700
Labels:           app=hello-node-connect
                  pod-template-hash=6f49f58cd5
Annotations:      <none>
Status:           Running
IP:               10.244.0.9
IPs:
  IP:           10.244.0.9
Controlled By:  ReplicaSet/hello-node-connect-6f49f58cd5
Containers:
  echoserver-arm:
    Container ID:   docker://c2836a21d7467715b623ba6959f48de209fcae60028ff3445bde9443dde56abc
    Image:          registry.k8s.io/echoserver-arm:1.8
    Image ID:       docker-pullable://registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
    Port:           <none>
    Host Port:      <none>
    State:          Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Mon, 22 Jul 2024 03:40:19 -0700
      Finished:     Mon, 22 Jul 2024 03:40:19 -0700
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Mon, 22 Jul 2024 03:40:04 -0700
      Finished:     Mon, 22 Jul 2024 03:40:04 -0700
    Ready:          False
    Restart Count:  2
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qp4n5 (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       False
  ContainersReady             False
  PodScheduled                True
Volumes:
  kube-api-access-qp4n5:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  26s                default-scheduler  Successfully assigned default/hello-node-connect-6f49f58cd5-bgvlc to functional-753000
  Normal   Pulled     10s (x3 over 25s)  kubelet            Container image "registry.k8s.io/echoserver-arm:1.8" already present on machine
  Normal   Created    9s (x3 over 25s)   kubelet            Created container echoserver-arm
  Normal   Started    9s (x3 over 25s)   kubelet            Started container echoserver-arm
  Warning  BackOff    9s (x3 over 23s)   kubelet            Back-off restarting failed container echoserver-arm in pod hello-node-connect-6f49f58cd5-bgvlc_default(4bc78060-362a-46b3-a035-ec5f1389e976)

functional_test.go:1604: (dbg) Run:  kubectl --context functional-753000 logs -l app=hello-node-connect
functional_test.go:1608: hello-node logs:
exec /usr/sbin/nginx: exec format error
functional_test.go:1610: (dbg) Run:  kubectl --context functional-753000 describe svc hello-node-connect
functional_test.go:1614: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.102.108.140
IPs:                      10.102.108.140
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  31537/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
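The diagnostics above are internally consistent: the container exits immediately with "exec /usr/sbin/nginx: exec format error", which typically indicates that the binary inside the image was built for a different CPU architecture than the arm64 node, so the pod never becomes Ready, the Service lists no Endpoints, and every request to the NodePort is refused. As a stand-alone cross-check (a sketch only, not part of functional_test.go), the snippet below compares the image's recorded architecture with the local one; it assumes a local docker CLI, that registry.k8s.io/echoserver-arm:1.8 has been pulled, and that the docker daemon runs natively on this machine.

// check_image_arch.go - hedged sketch for spotting an image/host architecture mismatch.
package main

import (
	"fmt"
	"os/exec"
	"runtime"
	"strings"
)

func main() {
	// Image name taken from the test invocation above.
	image := "registry.k8s.io/echoserver-arm:1.8"

	// .Architecture is a standard field of `docker image inspect` output.
	out, err := exec.Command("docker", "image", "inspect",
		"--format", "{{.Architecture}}", image).Output()
	if err != nil {
		fmt.Println("inspect failed (is the image pulled locally?):", err)
		return
	}

	arch := strings.TrimSpace(string(out))
	fmt.Printf("image arch=%s, host arch=%s\n", arch, runtime.GOARCH)
	// A plain string comparison is simplistic; close-but-compatible pairs
	// (e.g. arm vs arm64) still need manual judgment.
	if arch != runtime.GOARCH {
		fmt.Println("mismatch: the container entrypoint would fail with an exec format error")
	}
}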
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-753000 -n functional-753000
helpers_test.go:244: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 logs -n 25
helpers_test.go:252: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| Command |                                                      Args                                                       |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| image   | functional-753000 image ls                                                                                      | functional-753000 | jenkins | v1.33.1 | 22 Jul 24 03:39 PDT | 22 Jul 24 03:39 PDT |
	| image   | functional-753000 image save                                                                                    | functional-753000 | jenkins | v1.33.1 | 22 Jul 24 03:39 PDT | 22 Jul 24 03:39 PDT |
	|         | docker.io/kicbase/echo-server:functional-753000                                                                 |                   |         |         |                     |                     |
	|         | /Users/jenkins/workspace/echo-server-save.tar                                                                   |                   |         |         |                     |                     |
	|         | --alsologtostderr                                                                                               |                   |         |         |                     |                     |
	| image   | functional-753000 image rm                                                                                      | functional-753000 | jenkins | v1.33.1 | 22 Jul 24 03:39 PDT | 22 Jul 24 03:39 PDT |
	|         | docker.io/kicbase/echo-server:functional-753000                                                                 |                   |         |         |                     |                     |
	|         | --alsologtostderr                                                                                               |                   |         |         |                     |                     |
	| image   | functional-753000 image ls                                                                                      | functional-753000 | jenkins | v1.33.1 | 22 Jul 24 03:39 PDT | 22 Jul 24 03:39 PDT |
	| image   | functional-753000 image load                                                                                    | functional-753000 | jenkins | v1.33.1 | 22 Jul 24 03:39 PDT | 22 Jul 24 03:39 PDT |
	|         | /Users/jenkins/workspace/echo-server-save.tar                                                                   |                   |         |         |                     |                     |
	|         | --alsologtostderr                                                                                               |                   |         |         |                     |                     |
	| image   | functional-753000 image ls                                                                                      | functional-753000 | jenkins | v1.33.1 | 22 Jul 24 03:39 PDT | 22 Jul 24 03:39 PDT |
	| image   | functional-753000 image save --daemon                                                                           | functional-753000 | jenkins | v1.33.1 | 22 Jul 24 03:39 PDT | 22 Jul 24 03:39 PDT |
	|         | docker.io/kicbase/echo-server:functional-753000                                                                 |                   |         |         |                     |                     |
	|         | --alsologtostderr                                                                                               |                   |         |         |                     |                     |
	| ssh     | functional-753000 ssh echo                                                                                      | functional-753000 | jenkins | v1.33.1 | 22 Jul 24 03:39 PDT | 22 Jul 24 03:39 PDT |
	|         | hello                                                                                                           |                   |         |         |                     |                     |
	| ssh     | functional-753000 ssh cat                                                                                       | functional-753000 | jenkins | v1.33.1 | 22 Jul 24 03:39 PDT | 22 Jul 24 03:39 PDT |
	|         | /etc/hostname                                                                                                   |                   |         |         |                     |                     |
	| tunnel  | functional-753000 tunnel                                                                                        | functional-753000 | jenkins | v1.33.1 | 22 Jul 24 03:39 PDT |                     |
	|         | --alsologtostderr                                                                                               |                   |         |         |                     |                     |
	| tunnel  | functional-753000 tunnel                                                                                        | functional-753000 | jenkins | v1.33.1 | 22 Jul 24 03:39 PDT |                     |
	|         | --alsologtostderr                                                                                               |                   |         |         |                     |                     |
	| tunnel  | functional-753000 tunnel                                                                                        | functional-753000 | jenkins | v1.33.1 | 22 Jul 24 03:39 PDT |                     |
	|         | --alsologtostderr                                                                                               |                   |         |         |                     |                     |
	| service | functional-753000 service list                                                                                  | functional-753000 | jenkins | v1.33.1 | 22 Jul 24 03:40 PDT | 22 Jul 24 03:40 PDT |
	| service | functional-753000 service list                                                                                  | functional-753000 | jenkins | v1.33.1 | 22 Jul 24 03:40 PDT | 22 Jul 24 03:40 PDT |
	|         | -o json                                                                                                         |                   |         |         |                     |                     |
	| service | functional-753000 service                                                                                       | functional-753000 | jenkins | v1.33.1 | 22 Jul 24 03:40 PDT | 22 Jul 24 03:40 PDT |
	|         | --namespace=default --https                                                                                     |                   |         |         |                     |                     |
	|         | --url hello-node                                                                                                |                   |         |         |                     |                     |
	| service | functional-753000                                                                                               | functional-753000 | jenkins | v1.33.1 | 22 Jul 24 03:40 PDT | 22 Jul 24 03:40 PDT |
	|         | service hello-node --url                                                                                        |                   |         |         |                     |                     |
	|         | --format={{.IP}}                                                                                                |                   |         |         |                     |                     |
	| service | functional-753000 service                                                                                       | functional-753000 | jenkins | v1.33.1 | 22 Jul 24 03:40 PDT | 22 Jul 24 03:40 PDT |
	|         | hello-node --url                                                                                                |                   |         |         |                     |                     |
	| addons  | functional-753000 addons list                                                                                   | functional-753000 | jenkins | v1.33.1 | 22 Jul 24 03:40 PDT | 22 Jul 24 03:40 PDT |
	| addons  | functional-753000 addons list                                                                                   | functional-753000 | jenkins | v1.33.1 | 22 Jul 24 03:40 PDT | 22 Jul 24 03:40 PDT |
	|         | -o json                                                                                                         |                   |         |         |                     |                     |
	| service | functional-753000 service                                                                                       | functional-753000 | jenkins | v1.33.1 | 22 Jul 24 03:40 PDT | 22 Jul 24 03:40 PDT |
	|         | hello-node-connect --url                                                                                        |                   |         |         |                     |                     |
	| ssh     | functional-753000 ssh findmnt                                                                                   | functional-753000 | jenkins | v1.33.1 | 22 Jul 24 03:40 PDT |                     |
	|         | -T /mount-9p | grep 9p                                                                                          |                   |         |         |                     |                     |
	| mount   | -p functional-753000                                                                                            | functional-753000 | jenkins | v1.33.1 | 22 Jul 24 03:40 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port1539469767/001:/mount-9p |                   |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                                          |                   |         |         |                     |                     |
	| ssh     | functional-753000 ssh findmnt                                                                                   | functional-753000 | jenkins | v1.33.1 | 22 Jul 24 03:40 PDT | 22 Jul 24 03:40 PDT |
	|         | -T /mount-9p | grep 9p                                                                                          |                   |         |         |                     |                     |
	| ssh     | functional-753000 ssh -- ls                                                                                     | functional-753000 | jenkins | v1.33.1 | 22 Jul 24 03:40 PDT | 22 Jul 24 03:40 PDT |
	|         | -la /mount-9p                                                                                                   |                   |         |         |                     |                     |
	| ssh     | functional-753000 ssh cat                                                                                       | functional-753000 | jenkins | v1.33.1 | 22 Jul 24 03:40 PDT | 22 Jul 24 03:40 PDT |
	|         | /mount-9p/test-1721644827147790000                                                                              |                   |         |         |                     |                     |
	|---------|-----------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
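	
	The table above is the per-command audit trail for the functional-753000 profile (command, arguments, profile, user, minikube version, start and end time), and the "Last Start" section below is the start log that goes with it. Assuming the profile still exists on the build agent, the same two sections could be captured locally with minikube's own log collector, for example:
	
	    minikube -p functional-753000 logs --file=functional-753000-logs.txt
	
	(The --file flag simply writes the collected output to a file instead of stdout; it is shown here as an illustration and was not part of this run.)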
	
	
	==> Last Start <==
	Log file created at: 2024/07/22 03:39:06
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0722 03:39:06.520992    2324 out.go:291] Setting OutFile to fd 1 ...
	I0722 03:39:06.521109    2324 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 03:39:06.521111    2324 out.go:304] Setting ErrFile to fd 2...
	I0722 03:39:06.521113    2324 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 03:39:06.521256    2324 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19313-1127/.minikube/bin
	I0722 03:39:06.522362    2324 out.go:298] Setting JSON to false
	I0722 03:39:06.538948    2324 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2315,"bootTime":1721642431,"procs":445,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0722 03:39:06.539010    2324 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0722 03:39:06.543648    2324 out.go:177] * [functional-753000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0722 03:39:06.551646    2324 out.go:177]   - MINIKUBE_LOCATION=19313
	I0722 03:39:06.551690    2324 notify.go:220] Checking for updates...
	I0722 03:39:06.559581    2324 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19313-1127/kubeconfig
	I0722 03:39:06.562520    2324 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0722 03:39:06.565569    2324 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0722 03:39:06.568561    2324 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19313-1127/.minikube
	I0722 03:39:06.571526    2324 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0722 03:39:06.574803    2324 config.go:182] Loaded profile config "functional-753000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0722 03:39:06.574853    2324 driver.go:392] Setting default libvirt URI to qemu:///system
	I0722 03:39:06.578547    2324 out.go:177] * Using the qemu2 driver based on existing profile
	I0722 03:39:06.585543    2324 start.go:297] selected driver: qemu2
	I0722 03:39:06.585546    2324 start.go:901] validating driver "qemu2" against &{Name:functional-753000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.30.3 ClusterName:functional-753000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 03:39:06.585600    2324 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0722 03:39:06.587757    2324 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0722 03:39:06.587774    2324 cni.go:84] Creating CNI manager for ""
	I0722 03:39:06.587780    2324 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0722 03:39:06.587819    2324 start.go:340] cluster config:
	{Name:functional-753000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-753000 Namespace:default APIServe
rHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 03:39:06.591082    2324 iso.go:125] acquiring lock: {Name:mkd71eaf3e91c1dd737b75fca5ca69ff9bdad18f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 03:39:06.598564    2324 out.go:177] * Starting "functional-753000" primary control-plane node in "functional-753000" cluster
	I0722 03:39:06.602551    2324 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0722 03:39:06.602563    2324 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0722 03:39:06.602571    2324 cache.go:56] Caching tarball of preloaded images
	I0722 03:39:06.602622    2324 preload.go:172] Found /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0722 03:39:06.602625    2324 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0722 03:39:06.602674    2324 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/functional-753000/config.json ...
	I0722 03:39:06.603105    2324 start.go:360] acquireMachinesLock for functional-753000: {Name:mkd413881e612ea8d9ddb0175c22cca270cd2452 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0722 03:39:06.603135    2324 start.go:364] duration metric: took 25.25µs to acquireMachinesLock for "functional-753000"
	I0722 03:39:06.603142    2324 start.go:96] Skipping create...Using existing machine configuration
	I0722 03:39:06.603148    2324 fix.go:54] fixHost starting: 
	I0722 03:39:06.603729    2324 fix.go:112] recreateIfNeeded on functional-753000: state=Running err=<nil>
	W0722 03:39:06.603735    2324 fix.go:138] unexpected machine state, will restart: <nil>
	I0722 03:39:06.607558    2324 out.go:177] * Updating the running qemu2 "functional-753000" VM ...
	I0722 03:39:06.615592    2324 machine.go:94] provisionDockerMachine start ...
	I0722 03:39:06.615632    2324 main.go:141] libmachine: Using SSH client type: native
	I0722 03:39:06.615740    2324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10273aa10] 0x10273d270 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0722 03:39:06.615742    2324 main.go:141] libmachine: About to run SSH command:
	hostname
	I0722 03:39:06.657374    2324 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-753000
	
	I0722 03:39:06.657384    2324 buildroot.go:166] provisioning hostname "functional-753000"
	I0722 03:39:06.657421    2324 main.go:141] libmachine: Using SSH client type: native
	I0722 03:39:06.657535    2324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10273aa10] 0x10273d270 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0722 03:39:06.657539    2324 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-753000 && echo "functional-753000" | sudo tee /etc/hostname
	I0722 03:39:06.703214    2324 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-753000
	
	I0722 03:39:06.703257    2324 main.go:141] libmachine: Using SSH client type: native
	I0722 03:39:06.703374    2324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10273aa10] 0x10273d270 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0722 03:39:06.703380    2324 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-753000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-753000/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-753000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0722 03:39:06.745185    2324 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0722 03:39:06.745194    2324 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19313-1127/.minikube CaCertPath:/Users/jenkins/minikube-integration/19313-1127/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19313-1127/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19313-1127/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19313-1127/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19313-1127/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19313-1127/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19313-1127/.minikube}
	I0722 03:39:06.745203    2324 buildroot.go:174] setting up certificates
	I0722 03:39:06.745206    2324 provision.go:84] configureAuth start
	I0722 03:39:06.745209    2324 provision.go:143] copyHostCerts
	I0722 03:39:06.745267    2324 exec_runner.go:144] found /Users/jenkins/minikube-integration/19313-1127/.minikube/ca.pem, removing ...
	I0722 03:39:06.745271    2324 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19313-1127/.minikube/ca.pem
	I0722 03:39:06.745396    2324 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19313-1127/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19313-1127/.minikube/ca.pem (1078 bytes)
	I0722 03:39:06.745563    2324 exec_runner.go:144] found /Users/jenkins/minikube-integration/19313-1127/.minikube/cert.pem, removing ...
	I0722 03:39:06.745565    2324 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19313-1127/.minikube/cert.pem
	I0722 03:39:06.745611    2324 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19313-1127/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19313-1127/.minikube/cert.pem (1123 bytes)
	I0722 03:39:06.745710    2324 exec_runner.go:144] found /Users/jenkins/minikube-integration/19313-1127/.minikube/key.pem, removing ...
	I0722 03:39:06.745712    2324 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19313-1127/.minikube/key.pem
	I0722 03:39:06.745753    2324 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19313-1127/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19313-1127/.minikube/key.pem (1675 bytes)
	I0722 03:39:06.745828    2324 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19313-1127/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19313-1127/.minikube/certs/ca-key.pem org=jenkins.functional-753000 san=[127.0.0.1 192.168.105.4 functional-753000 localhost minikube]
	I0722 03:39:07.066419    2324 provision.go:177] copyRemoteCerts
	I0722 03:39:07.066462    2324 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0722 03:39:07.066470    2324 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19313-1127/.minikube/machines/functional-753000/id_rsa Username:docker}
	I0722 03:39:07.092246    2324 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1127/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0722 03:39:07.100859    2324 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0722 03:39:07.108931    2324 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0722 03:39:07.117001    2324 provision.go:87] duration metric: took 371.786459ms to configureAuth
	I0722 03:39:07.117009    2324 buildroot.go:189] setting minikube options for container-runtime
	I0722 03:39:07.117115    2324 config.go:182] Loaded profile config "functional-753000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0722 03:39:07.117144    2324 main.go:141] libmachine: Using SSH client type: native
	I0722 03:39:07.117226    2324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10273aa10] 0x10273d270 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0722 03:39:07.117230    2324 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0722 03:39:07.160116    2324 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0722 03:39:07.160121    2324 buildroot.go:70] root file system type: tmpfs
	I0722 03:39:07.160170    2324 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0722 03:39:07.160219    2324 main.go:141] libmachine: Using SSH client type: native
	I0722 03:39:07.160323    2324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10273aa10] 0x10273d270 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0722 03:39:07.160353    2324 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0722 03:39:07.206500    2324 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0722 03:39:07.206555    2324 main.go:141] libmachine: Using SSH client type: native
	I0722 03:39:07.206677    2324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10273aa10] 0x10273d270 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0722 03:39:07.206683    2324 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0722 03:39:07.251437    2324 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0722 03:39:07.251443    2324 machine.go:97] duration metric: took 635.849875ms to provisionDockerMachine
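	
	The docker.service unit printed above is only written to /lib/systemd/system/docker.service.new; the diff || { mv; daemon-reload; enable; restart; } command that follows it swaps the new unit in and restarts Docker only when the contents actually changed. A quick manual check of which ExecStart line systemd ended up with, assuming SSH access to the VM, would be:
	
	    sudo systemctl cat docker.service
	    systemctl show -p ExecStart docker
	
	(systemctl cat is the same inspection minikube itself performs a moment later at 03:39:07.574287; the systemctl show variant is an extra illustration, not taken from this log.)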
	I0722 03:39:07.251448    2324 start.go:293] postStartSetup for "functional-753000" (driver="qemu2")
	I0722 03:39:07.251453    2324 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0722 03:39:07.251491    2324 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0722 03:39:07.251498    2324 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19313-1127/.minikube/machines/functional-753000/id_rsa Username:docker}
	I0722 03:39:07.275095    2324 ssh_runner.go:195] Run: cat /etc/os-release
	I0722 03:39:07.276724    2324 info.go:137] Remote host: Buildroot 2023.02.9
	I0722 03:39:07.276730    2324 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19313-1127/.minikube/addons for local assets ...
	I0722 03:39:07.276830    2324 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19313-1127/.minikube/files for local assets ...
	I0722 03:39:07.276957    2324 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19313-1127/.minikube/files/etc/ssl/certs/16182.pem -> 16182.pem in /etc/ssl/certs
	I0722 03:39:07.277074    2324 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19313-1127/.minikube/files/etc/test/nested/copy/1618/hosts -> hosts in /etc/test/nested/copy/1618
	I0722 03:39:07.277112    2324 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1618
	I0722 03:39:07.280674    2324 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1127/.minikube/files/etc/ssl/certs/16182.pem --> /etc/ssl/certs/16182.pem (1708 bytes)
	I0722 03:39:07.289314    2324 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1127/.minikube/files/etc/test/nested/copy/1618/hosts --> /etc/test/nested/copy/1618/hosts (40 bytes)
	I0722 03:39:07.297258    2324 start.go:296] duration metric: took 45.806625ms for postStartSetup
	I0722 03:39:07.297269    2324 fix.go:56] duration metric: took 694.124375ms for fixHost
	I0722 03:39:07.297298    2324 main.go:141] libmachine: Using SSH client type: native
	I0722 03:39:07.297395    2324 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10273aa10] 0x10273d270 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0722 03:39:07.297397    2324 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0722 03:39:07.338180    2324 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721644747.367488184
	
	I0722 03:39:07.338185    2324 fix.go:216] guest clock: 1721644747.367488184
	I0722 03:39:07.338189    2324 fix.go:229] Guest: 2024-07-22 03:39:07.367488184 -0700 PDT Remote: 2024-07-22 03:39:07.29727 -0700 PDT m=+0.795120168 (delta=70.218184ms)
	I0722 03:39:07.338198    2324 fix.go:200] guest clock delta is within tolerance: 70.218184ms
	I0722 03:39:07.338200    2324 start.go:83] releasing machines lock for "functional-753000", held for 735.064042ms
	I0722 03:39:07.338450    2324 ssh_runner.go:195] Run: cat /version.json
	I0722 03:39:07.338455    2324 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19313-1127/.minikube/machines/functional-753000/id_rsa Username:docker}
	I0722 03:39:07.338484    2324 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0722 03:39:07.338501    2324 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19313-1127/.minikube/machines/functional-753000/id_rsa Username:docker}
	I0722 03:39:07.362404    2324 ssh_runner.go:195] Run: systemctl --version
	I0722 03:39:07.404535    2324 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0722 03:39:07.406432    2324 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0722 03:39:07.406454    2324 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0722 03:39:07.409921    2324 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0722 03:39:07.409926    2324 start.go:495] detecting cgroup driver to use...
	I0722 03:39:07.409985    2324 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0722 03:39:07.416185    2324 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0722 03:39:07.420401    2324 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0722 03:39:07.424422    2324 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0722 03:39:07.424444    2324 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0722 03:39:07.428530    2324 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0722 03:39:07.432370    2324 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0722 03:39:07.436363    2324 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0722 03:39:07.440366    2324 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0722 03:39:07.444534    2324 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0722 03:39:07.448016    2324 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0722 03:39:07.452138    2324 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0722 03:39:07.456518    2324 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0722 03:39:07.460205    2324 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0722 03:39:07.464014    2324 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 03:39:07.565943    2324 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0722 03:39:07.574197    2324 start.go:495] detecting cgroup driver to use...
	I0722 03:39:07.574287    2324 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0722 03:39:07.580810    2324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0722 03:39:07.586450    2324 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0722 03:39:07.595279    2324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0722 03:39:07.600920    2324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0722 03:39:07.606458    2324 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0722 03:39:07.612882    2324 ssh_runner.go:195] Run: which cri-dockerd
	I0722 03:39:07.614287    2324 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0722 03:39:07.617476    2324 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0722 03:39:07.623736    2324 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0722 03:39:07.718219    2324 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0722 03:39:07.808966    2324 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0722 03:39:07.809013    2324 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0722 03:39:07.815706    2324 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 03:39:07.911457    2324 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0722 03:39:20.287294    2324 ssh_runner.go:235] Completed: sudo systemctl restart docker: (12.375852625s)
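	
	The 12.4 s Docker restart above follows minikube rewriting /etc/docker/daemon.json so that Docker uses the cgroupfs cgroup driver (docker.go:574 above). The driver actually in effect can be confirmed with the same query minikube issues later at 03:39:20.740241:
	
	    docker info --format '{{.CgroupDriver}}'
	
	For this profile it should report cgroupfs, matching the cgroupDriver: cgroupfs entry in the generated kubelet configuration further below.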
	I0722 03:39:20.287360    2324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0722 03:39:20.293856    2324 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0722 03:39:20.304016    2324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0722 03:39:20.309717    2324 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0722 03:39:20.384727    2324 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0722 03:39:20.475090    2324 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 03:39:20.568294    2324 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0722 03:39:20.574732    2324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0722 03:39:20.580055    2324 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 03:39:20.652426    2324 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0722 03:39:20.680853    2324 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0722 03:39:20.680922    2324 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0722 03:39:20.683433    2324 start.go:563] Will wait 60s for crictl version
	I0722 03:39:20.683465    2324 ssh_runner.go:195] Run: which crictl
	I0722 03:39:20.684909    2324 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0722 03:39:20.698602    2324 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.0.3
	RuntimeApiVersion:  v1
	I0722 03:39:20.698667    2324 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0722 03:39:20.705840    2324 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0722 03:39:20.720692    2324 out.go:204] * Preparing Kubernetes v1.30.3 on Docker 27.0.3 ...
	I0722 03:39:20.720768    2324 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0722 03:39:20.724622    2324 out.go:177]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I0722 03:39:20.728666    2324 kubeadm.go:883] updating cluster {Name:functional-753000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.30.3 ClusterName:functional-753000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s M
ount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0722 03:39:20.728730    2324 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0722 03:39:20.728789    2324 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0722 03:39:20.734581    2324 docker.go:685] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-753000
	registry.k8s.io/kube-apiserver:v1.30.3
	registry.k8s.io/kube-controller-manager:v1.30.3
	registry.k8s.io/kube-scheduler:v1.30.3
	registry.k8s.io/kube-proxy:v1.30.3
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I0722 03:39:20.734585    2324 docker.go:615] Images already preloaded, skipping extraction
	I0722 03:39:20.734631    2324 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0722 03:39:20.740126    2324 docker.go:685] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-753000
	registry.k8s.io/kube-apiserver:v1.30.3
	registry.k8s.io/kube-controller-manager:v1.30.3
	registry.k8s.io/kube-scheduler:v1.30.3
	registry.k8s.io/kube-proxy:v1.30.3
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I0722 03:39:20.740133    2324 cache_images.go:84] Images are preloaded, skipping loading
	I0722 03:39:20.740137    2324 kubeadm.go:934] updating node { 192.168.105.4 8441 v1.30.3 docker true true} ...
	I0722 03:39:20.740198    2324 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-753000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:functional-753000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0722 03:39:20.740241    2324 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0722 03:39:20.747547    2324 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I0722 03:39:20.747586    2324 cni.go:84] Creating CNI manager for ""
	I0722 03:39:20.747593    2324 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0722 03:39:20.747606    2324 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0722 03:39:20.747616    2324 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.4 APIServerPort:8441 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-753000 NodeName:functional-753000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.4"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.4 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOp
ts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0722 03:39:20.747682    2324 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.4
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-753000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.4
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.4"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0722 03:39:20.747730    2324 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0722 03:39:20.751113    2324 binaries.go:44] Found k8s binaries, skipping transfer
	I0722 03:39:20.751134    2324 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0722 03:39:20.754401    2324 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0722 03:39:20.760581    2324 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0722 03:39:20.766299    2324 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2012 bytes)
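	
	The 2012-byte kubeadm.yaml.new written here is the config printed above; whether the control plane needs to be reconfigured is decided later (03:39:21.022259 onward) by diffing it against the existing /var/tmp/minikube/kubeadm.yaml. Assuming SSH access to the node, the same drift check, plus an optional schema validation that this run does not perform, would look like:
	
	    sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	    sudo /var/lib/minikube/binaries/v1.30.3/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
	
	(The kubeadm binary path assumes kubeadm is among the cached v1.30.3 binaries found at 03:39:20.751113; the validate subcommand is illustrative and not part of this test.)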
	I0722 03:39:20.772494    2324 ssh_runner.go:195] Run: grep 192.168.105.4	control-plane.minikube.internal$ /etc/hosts
	I0722 03:39:20.773987    2324 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 03:39:20.852881    2324 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 03:39:20.858901    2324 certs.go:68] Setting up /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/functional-753000 for IP: 192.168.105.4
	I0722 03:39:20.858904    2324 certs.go:194] generating shared ca certs ...
	I0722 03:39:20.858911    2324 certs.go:226] acquiring lock for ca certs: {Name:mk3f2c80d56e217629ae5cc59f1253ebc769d305 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 03:39:20.859061    2324 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19313-1127/.minikube/ca.key
	I0722 03:39:20.859111    2324 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19313-1127/.minikube/proxy-client-ca.key
	I0722 03:39:20.859114    2324 certs.go:256] generating profile certs ...
	I0722 03:39:20.859170    2324 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/functional-753000/client.key
	I0722 03:39:20.859220    2324 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/functional-753000/apiserver.key.7b1be317
	I0722 03:39:20.859266    2324 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/functional-753000/proxy-client.key
	I0722 03:39:20.859415    2324 certs.go:484] found cert: /Users/jenkins/minikube-integration/19313-1127/.minikube/certs/1618.pem (1338 bytes)
	W0722 03:39:20.859441    2324 certs.go:480] ignoring /Users/jenkins/minikube-integration/19313-1127/.minikube/certs/1618_empty.pem, impossibly tiny 0 bytes
	I0722 03:39:20.859445    2324 certs.go:484] found cert: /Users/jenkins/minikube-integration/19313-1127/.minikube/certs/ca-key.pem (1679 bytes)
	I0722 03:39:20.859463    2324 certs.go:484] found cert: /Users/jenkins/minikube-integration/19313-1127/.minikube/certs/ca.pem (1078 bytes)
	I0722 03:39:20.859483    2324 certs.go:484] found cert: /Users/jenkins/minikube-integration/19313-1127/.minikube/certs/cert.pem (1123 bytes)
	I0722 03:39:20.859500    2324 certs.go:484] found cert: /Users/jenkins/minikube-integration/19313-1127/.minikube/certs/key.pem (1675 bytes)
	I0722 03:39:20.859540    2324 certs.go:484] found cert: /Users/jenkins/minikube-integration/19313-1127/.minikube/files/etc/ssl/certs/16182.pem (1708 bytes)
	I0722 03:39:20.859832    2324 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1127/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0722 03:39:20.868432    2324 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1127/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0722 03:39:20.876960    2324 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1127/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0722 03:39:20.885573    2324 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1127/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0722 03:39:20.894473    2324 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/functional-753000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0722 03:39:20.902726    2324 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/functional-753000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0722 03:39:20.910926    2324 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/functional-753000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0722 03:39:20.919153    2324 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/functional-753000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0722 03:39:20.927463    2324 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1127/.minikube/files/etc/ssl/certs/16182.pem --> /usr/share/ca-certificates/16182.pem (1708 bytes)
	I0722 03:39:20.935526    2324 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1127/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0722 03:39:20.943880    2324 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1127/.minikube/certs/1618.pem --> /usr/share/ca-certificates/1618.pem (1338 bytes)
	I0722 03:39:20.952471    2324 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0722 03:39:20.958324    2324 ssh_runner.go:195] Run: openssl version
	I0722 03:39:20.960429    2324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16182.pem && ln -fs /usr/share/ca-certificates/16182.pem /etc/ssl/certs/16182.pem"
	I0722 03:39:20.963989    2324 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16182.pem
	I0722 03:39:20.965717    2324 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 22 10:36 /usr/share/ca-certificates/16182.pem
	I0722 03:39:20.965736    2324 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16182.pem
	I0722 03:39:20.968092    2324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16182.pem /etc/ssl/certs/3ec20f2e.0"
	I0722 03:39:20.971371    2324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0722 03:39:20.975006    2324 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0722 03:39:20.976612    2324 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 22 10:29 /usr/share/ca-certificates/minikubeCA.pem
	I0722 03:39:20.976629    2324 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0722 03:39:20.978626    2324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0722 03:39:20.982075    2324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1618.pem && ln -fs /usr/share/ca-certificates/1618.pem /etc/ssl/certs/1618.pem"
	I0722 03:39:20.986085    2324 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1618.pem
	I0722 03:39:20.987678    2324 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 22 10:36 /usr/share/ca-certificates/1618.pem
	I0722 03:39:20.987693    2324 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1618.pem
	I0722 03:39:20.989702    2324 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1618.pem /etc/ssl/certs/51391683.0"
	I0722 03:39:20.993672    2324 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0722 03:39:20.995290    2324 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0722 03:39:20.997240    2324 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0722 03:39:20.999349    2324 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0722 03:39:21.001330    2324 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0722 03:39:21.003418    2324 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0722 03:39:21.005435    2324 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
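	
	Each of the -checkend 86400 calls above exits 0 only if the certificate in question is still valid 86400 seconds (24 hours) from now, presumably so minikube can decide whether the existing control-plane certificates are safe to reuse. A manual spot check of any one of them, with a human-readable result added purely for illustration, would be:
	
	    sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
	      && echo "valid for at least 24h" || echo "expires within 24h"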
	I0722 03:39:21.007623    2324 kubeadm.go:392] StartCluster: {Name:functional-753000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.3
0.3 ClusterName:functional-753000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Moun
t:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 03:39:21.007694    2324 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0722 03:39:21.013666    2324 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0722 03:39:21.017756    2324 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0722 03:39:21.017759    2324 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0722 03:39:21.017778    2324 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0722 03:39:21.021370    2324 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0722 03:39:21.021660    2324 kubeconfig.go:125] found "functional-753000" server: "https://192.168.105.4:8441"
	I0722 03:39:21.022259    2324 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0722 03:39:21.025792    2324 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -22,7 +22,7 @@
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.105.4"]
	   extraArgs:
	-    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+    enable-admission-plugins: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     allocate-node-cidrs: "true"
	
	-- /stdout --
	I0722 03:39:21.025795    2324 kubeadm.go:1160] stopping kube-system containers ...
	I0722 03:39:21.025833    2324 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0722 03:39:21.040464    2324 docker.go:483] Stopping containers: [a5d96268847f da7f10269a75 39acd047c385 2de4ee23a536 1e68721f5bf6 60e9d6f81ad2 1ad2aab65136 676a3c027079 24fce56917d4 7f6e321decca 5c4aed695edd 67a97afb49f2 2513219ff851 8e648f87155b 1ddafb54406e 92e33480b37f c685c1ec88b4 86b74e5b6abe 6ba8a67d358b 506c3f8e865d 4ea72adebd81 49cc57b14ead 22d4bc360216 3bc6d9476828 4074abec8373 26e27e48bb4a 046aa8f8562f]
	I0722 03:39:21.040522    2324 ssh_runner.go:195] Run: docker stop a5d96268847f da7f10269a75 39acd047c385 2de4ee23a536 1e68721f5bf6 60e9d6f81ad2 1ad2aab65136 676a3c027079 24fce56917d4 7f6e321decca 5c4aed695edd 67a97afb49f2 2513219ff851 8e648f87155b 1ddafb54406e 92e33480b37f c685c1ec88b4 86b74e5b6abe 6ba8a67d358b 506c3f8e865d 4ea72adebd81 49cc57b14ead 22d4bc360216 3bc6d9476828 4074abec8373 26e27e48bb4a 046aa8f8562f
	I0722 03:39:21.047611    2324 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0722 03:39:21.143582    2324 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 03:39:21.148998    2324 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5651 Jul 22 10:37 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5657 Jul 22 10:38 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2007 Jul 22 10:37 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5605 Jul 22 10:38 /etc/kubernetes/scheduler.conf
	
	I0722 03:39:21.149035    2324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I0722 03:39:21.153840    2324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I0722 03:39:21.158342    2324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I0722 03:39:21.162339    2324 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0722 03:39:21.162358    2324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 03:39:21.166312    2324 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I0722 03:39:21.170032    2324 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0722 03:39:21.170050    2324 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0722 03:39:21.173801    2324 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0722 03:39:21.177483    2324 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 03:39:21.197980    2324 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 03:39:21.799361    2324 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0722 03:39:21.904682    2324 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 03:39:21.927858    2324 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0722 03:39:21.950510    2324 api_server.go:52] waiting for apiserver process to appear ...
	I0722 03:39:21.950577    2324 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 03:39:22.452631    2324 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 03:39:22.952620    2324 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 03:39:22.957936    2324 api_server.go:72] duration metric: took 1.00743025s to wait for apiserver process to appear ...
	I0722 03:39:22.957941    2324 api_server.go:88] waiting for apiserver healthz status ...
	I0722 03:39:22.957951    2324 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0722 03:39:24.872421    2324 api_server.go:279] https://192.168.105.4:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0722 03:39:24.872430    2324 api_server.go:103] status: https://192.168.105.4:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0722 03:39:24.872435    2324 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0722 03:39:24.894633    2324 api_server.go:279] https://192.168.105.4:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0722 03:39:24.894640    2324 api_server.go:103] status: https://192.168.105.4:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0722 03:39:24.960008    2324 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0722 03:39:24.962651    2324 api_server.go:279] https://192.168.105.4:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0722 03:39:24.962657    2324 api_server.go:103] status: https://192.168.105.4:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0722 03:39:25.459989    2324 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0722 03:39:25.463471    2324 api_server.go:279] https://192.168.105.4:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0722 03:39:25.463478    2324 api_server.go:103] status: https://192.168.105.4:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0722 03:39:25.959980    2324 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0722 03:39:25.964531    2324 api_server.go:279] https://192.168.105.4:8441/healthz returned 200:
	ok
	I0722 03:39:25.968309    2324 api_server.go:141] control plane version: v1.30.3
	I0722 03:39:25.968317    2324 api_server.go:131] duration metric: took 3.010379625s to wait for apiserver health ...
	I0722 03:39:25.968321    2324 cni.go:84] Creating CNI manager for ""
	I0722 03:39:25.968326    2324 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0722 03:39:25.971361    2324 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0722 03:39:25.975395    2324 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0722 03:39:25.979005    2324 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0722 03:39:25.984694    2324 system_pods.go:43] waiting for kube-system pods to appear ...
	I0722 03:39:25.989010    2324 system_pods.go:59] 7 kube-system pods found
	I0722 03:39:25.989018    2324 system_pods.go:61] "coredns-7db6d8ff4d-pt7q4" [8bf37bef-29e4-4c67-a7b7-77c015e7763b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0722 03:39:25.989020    2324 system_pods.go:61] "etcd-functional-753000" [468471de-1030-42ed-bbf4-57f6c0dae6bd] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0722 03:39:25.989023    2324 system_pods.go:61] "kube-apiserver-functional-753000" [26f03245-1f14-4c59-b69c-5dbd093ca602] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0722 03:39:25.989025    2324 system_pods.go:61] "kube-controller-manager-functional-753000" [6f16e5fc-45c2-44b4-8b9f-72a79431e530] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0722 03:39:25.989027    2324 system_pods.go:61] "kube-proxy-s89cr" [b8581931-2d55-4193-83d8-22133887efb3] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0722 03:39:25.989029    2324 system_pods.go:61] "kube-scheduler-functional-753000" [8786b959-f435-4eca-b362-c2568e32d971] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0722 03:39:25.989031    2324 system_pods.go:61] "storage-provisioner" [bcabe724-4a68-4e41-b621-987f25a3ca6b] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0722 03:39:25.989033    2324 system_pods.go:74] duration metric: took 4.336084ms to wait for pod list to return data ...
	I0722 03:39:25.989035    2324 node_conditions.go:102] verifying NodePressure condition ...
	I0722 03:39:25.990374    2324 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0722 03:39:25.990379    2324 node_conditions.go:123] node cpu capacity is 2
	I0722 03:39:25.990384    2324 node_conditions.go:105] duration metric: took 1.346792ms to run NodePressure ...
	I0722 03:39:25.990390    2324 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 03:39:26.211902    2324 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0722 03:39:26.214099    2324 kubeadm.go:739] kubelet initialised
	I0722 03:39:26.214102    2324 kubeadm.go:740] duration metric: took 2.192125ms waiting for restarted kubelet to initialise ...
	I0722 03:39:26.214106    2324 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 03:39:26.216879    2324 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-pt7q4" in "kube-system" namespace to be "Ready" ...
	I0722 03:39:28.222593    2324 pod_ready.go:102] pod "coredns-7db6d8ff4d-pt7q4" in "kube-system" namespace has status "Ready":"False"
	I0722 03:39:30.721615    2324 pod_ready.go:102] pod "coredns-7db6d8ff4d-pt7q4" in "kube-system" namespace has status "Ready":"False"
	I0722 03:39:32.721760    2324 pod_ready.go:102] pod "coredns-7db6d8ff4d-pt7q4" in "kube-system" namespace has status "Ready":"False"
	I0722 03:39:33.221999    2324 pod_ready.go:92] pod "coredns-7db6d8ff4d-pt7q4" in "kube-system" namespace has status "Ready":"True"
	I0722 03:39:33.222007    2324 pod_ready.go:81] duration metric: took 7.005139583s for pod "coredns-7db6d8ff4d-pt7q4" in "kube-system" namespace to be "Ready" ...
	I0722 03:39:33.222010    2324 pod_ready.go:78] waiting up to 4m0s for pod "etcd-functional-753000" in "kube-system" namespace to be "Ready" ...
	I0722 03:39:35.226405    2324 pod_ready.go:102] pod "etcd-functional-753000" in "kube-system" namespace has status "Ready":"False"
	I0722 03:39:35.726894    2324 pod_ready.go:92] pod "etcd-functional-753000" in "kube-system" namespace has status "Ready":"True"
	I0722 03:39:35.726899    2324 pod_ready.go:81] duration metric: took 2.504892583s for pod "etcd-functional-753000" in "kube-system" namespace to be "Ready" ...
	I0722 03:39:35.726903    2324 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-functional-753000" in "kube-system" namespace to be "Ready" ...
	I0722 03:39:36.235025    2324 pod_ready.go:92] pod "kube-apiserver-functional-753000" in "kube-system" namespace has status "Ready":"True"
	I0722 03:39:36.235033    2324 pod_ready.go:81] duration metric: took 508.128458ms for pod "kube-apiserver-functional-753000" in "kube-system" namespace to be "Ready" ...
	I0722 03:39:36.235037    2324 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-functional-753000" in "kube-system" namespace to be "Ready" ...
	I0722 03:39:37.239358    2324 pod_ready.go:92] pod "kube-controller-manager-functional-753000" in "kube-system" namespace has status "Ready":"True"
	I0722 03:39:37.239363    2324 pod_ready.go:81] duration metric: took 1.00432525s for pod "kube-controller-manager-functional-753000" in "kube-system" namespace to be "Ready" ...
	I0722 03:39:37.239367    2324 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-s89cr" in "kube-system" namespace to be "Ready" ...
	I0722 03:39:37.241090    2324 pod_ready.go:92] pod "kube-proxy-s89cr" in "kube-system" namespace has status "Ready":"True"
	I0722 03:39:37.241093    2324 pod_ready.go:81] duration metric: took 1.724084ms for pod "kube-proxy-s89cr" in "kube-system" namespace to be "Ready" ...
	I0722 03:39:37.241096    2324 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-functional-753000" in "kube-system" namespace to be "Ready" ...
	I0722 03:39:37.242779    2324 pod_ready.go:92] pod "kube-scheduler-functional-753000" in "kube-system" namespace has status "Ready":"True"
	I0722 03:39:37.242782    2324 pod_ready.go:81] duration metric: took 1.683416ms for pod "kube-scheduler-functional-753000" in "kube-system" namespace to be "Ready" ...
	I0722 03:39:37.242785    2324 pod_ready.go:38] duration metric: took 11.028701792s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 03:39:37.242793    2324 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0722 03:39:37.246974    2324 ops.go:34] apiserver oom_adj: -16
	I0722 03:39:37.246978    2324 kubeadm.go:597] duration metric: took 16.2292545s to restartPrimaryControlPlane
	I0722 03:39:37.246980    2324 kubeadm.go:394] duration metric: took 16.239398042s to StartCluster
	I0722 03:39:37.246988    2324 settings.go:142] acquiring lock: {Name:mk640939e683dda0ffda5b348284f38e73fbc066 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 03:39:37.247074    2324 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19313-1127/kubeconfig
	I0722 03:39:37.247427    2324 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19313-1127/kubeconfig: {Name:mkb5cae8b3f3a2ff5a3e393f1e4daf97762f1a5b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 03:39:37.247659    2324 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0722 03:39:37.247669    2324 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0722 03:39:37.247707    2324 addons.go:69] Setting storage-provisioner=true in profile "functional-753000"
	I0722 03:39:37.247717    2324 addons.go:234] Setting addon storage-provisioner=true in "functional-753000"
	W0722 03:39:37.247719    2324 addons.go:243] addon storage-provisioner should already be in state true
	I0722 03:39:37.247729    2324 host.go:66] Checking if "functional-753000" exists ...
	I0722 03:39:37.247732    2324 config.go:182] Loaded profile config "functional-753000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0722 03:39:37.247754    2324 addons.go:69] Setting default-storageclass=true in profile "functional-753000"
	I0722 03:39:37.247764    2324 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-753000"
	I0722 03:39:37.248676    2324 addons.go:234] Setting addon default-storageclass=true in "functional-753000"
	W0722 03:39:37.248679    2324 addons.go:243] addon default-storageclass should already be in state true
	I0722 03:39:37.248684    2324 host.go:66] Checking if "functional-753000" exists ...
	I0722 03:39:37.251780    2324 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0722 03:39:37.251783    2324 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0722 03:39:37.251788    2324 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19313-1127/.minikube/machines/functional-753000/id_rsa Username:docker}
	I0722 03:39:37.255695    2324 out.go:177] * Verifying Kubernetes components...
	I0722 03:39:37.258679    2324 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 03:39:37.262667    2324 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 03:39:37.268683    2324 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0722 03:39:37.268689    2324 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0722 03:39:37.268696    2324 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19313-1127/.minikube/machines/functional-753000/id_rsa Username:docker}
	I0722 03:39:37.360070    2324 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 03:39:37.367304    2324 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0722 03:39:37.367926    2324 node_ready.go:35] waiting up to 6m0s for node "functional-753000" to be "Ready" ...
	I0722 03:39:37.369458    2324 node_ready.go:49] node "functional-753000" has status "Ready":"True"
	I0722 03:39:37.369462    2324 node_ready.go:38] duration metric: took 1.529375ms for node "functional-753000" to be "Ready" ...
	I0722 03:39:37.369465    2324 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 03:39:37.372044    2324 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-pt7q4" in "kube-system" namespace to be "Ready" ...
	I0722 03:39:37.403735    2324 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0722 03:39:37.694180    2324 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0722 03:39:37.702162    2324 addons.go:510] duration metric: took 454.494333ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I0722 03:39:37.727296    2324 pod_ready.go:92] pod "coredns-7db6d8ff4d-pt7q4" in "kube-system" namespace has status "Ready":"True"
	I0722 03:39:37.727301    2324 pod_ready.go:81] duration metric: took 355.252917ms for pod "coredns-7db6d8ff4d-pt7q4" in "kube-system" namespace to be "Ready" ...
	I0722 03:39:37.727304    2324 pod_ready.go:78] waiting up to 6m0s for pod "etcd-functional-753000" in "kube-system" namespace to be "Ready" ...
	I0722 03:39:38.127407    2324 pod_ready.go:92] pod "etcd-functional-753000" in "kube-system" namespace has status "Ready":"True"
	I0722 03:39:38.127413    2324 pod_ready.go:81] duration metric: took 400.107542ms for pod "etcd-functional-753000" in "kube-system" namespace to be "Ready" ...
	I0722 03:39:38.127417    2324 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-functional-753000" in "kube-system" namespace to be "Ready" ...
	I0722 03:39:38.527469    2324 pod_ready.go:92] pod "kube-apiserver-functional-753000" in "kube-system" namespace has status "Ready":"True"
	I0722 03:39:38.527475    2324 pod_ready.go:81] duration metric: took 400.05625ms for pod "kube-apiserver-functional-753000" in "kube-system" namespace to be "Ready" ...
	I0722 03:39:38.527479    2324 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-functional-753000" in "kube-system" namespace to be "Ready" ...
	I0722 03:39:38.927268    2324 pod_ready.go:92] pod "kube-controller-manager-functional-753000" in "kube-system" namespace has status "Ready":"True"
	I0722 03:39:38.927272    2324 pod_ready.go:81] duration metric: took 399.790917ms for pod "kube-controller-manager-functional-753000" in "kube-system" namespace to be "Ready" ...
	I0722 03:39:38.927276    2324 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-s89cr" in "kube-system" namespace to be "Ready" ...
	I0722 03:39:39.327421    2324 pod_ready.go:92] pod "kube-proxy-s89cr" in "kube-system" namespace has status "Ready":"True"
	I0722 03:39:39.327428    2324 pod_ready.go:81] duration metric: took 400.150208ms for pod "kube-proxy-s89cr" in "kube-system" namespace to be "Ready" ...
	I0722 03:39:39.327431    2324 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-functional-753000" in "kube-system" namespace to be "Ready" ...
	I0722 03:39:39.727555    2324 pod_ready.go:92] pod "kube-scheduler-functional-753000" in "kube-system" namespace has status "Ready":"True"
	I0722 03:39:39.727560    2324 pod_ready.go:81] duration metric: took 400.127125ms for pod "kube-scheduler-functional-753000" in "kube-system" namespace to be "Ready" ...
	I0722 03:39:39.727563    2324 pod_ready.go:38] duration metric: took 2.358099917s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 03:39:39.727573    2324 api_server.go:52] waiting for apiserver process to appear ...
	I0722 03:39:39.727664    2324 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 03:39:39.733521    2324 api_server.go:72] duration metric: took 2.485859292s to wait for apiserver process to appear ...
	I0722 03:39:39.733527    2324 api_server.go:88] waiting for apiserver healthz status ...
	I0722 03:39:39.733534    2324 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0722 03:39:39.736166    2324 api_server.go:279] https://192.168.105.4:8441/healthz returned 200:
	ok
	I0722 03:39:39.736805    2324 api_server.go:141] control plane version: v1.30.3
	I0722 03:39:39.736810    2324 api_server.go:131] duration metric: took 3.281208ms to wait for apiserver health ...
	I0722 03:39:39.736813    2324 system_pods.go:43] waiting for kube-system pods to appear ...
	I0722 03:39:39.928813    2324 system_pods.go:59] 7 kube-system pods found
	I0722 03:39:39.928819    2324 system_pods.go:61] "coredns-7db6d8ff4d-pt7q4" [8bf37bef-29e4-4c67-a7b7-77c015e7763b] Running
	I0722 03:39:39.928821    2324 system_pods.go:61] "etcd-functional-753000" [468471de-1030-42ed-bbf4-57f6c0dae6bd] Running
	I0722 03:39:39.928823    2324 system_pods.go:61] "kube-apiserver-functional-753000" [26f03245-1f14-4c59-b69c-5dbd093ca602] Running
	I0722 03:39:39.928824    2324 system_pods.go:61] "kube-controller-manager-functional-753000" [6f16e5fc-45c2-44b4-8b9f-72a79431e530] Running
	I0722 03:39:39.928825    2324 system_pods.go:61] "kube-proxy-s89cr" [b8581931-2d55-4193-83d8-22133887efb3] Running
	I0722 03:39:39.928827    2324 system_pods.go:61] "kube-scheduler-functional-753000" [8786b959-f435-4eca-b362-c2568e32d971] Running
	I0722 03:39:39.928827    2324 system_pods.go:61] "storage-provisioner" [bcabe724-4a68-4e41-b621-987f25a3ca6b] Running
	I0722 03:39:39.928829    2324 system_pods.go:74] duration metric: took 192.015375ms to wait for pod list to return data ...
	I0722 03:39:39.928832    2324 default_sa.go:34] waiting for default service account to be created ...
	I0722 03:39:40.127309    2324 default_sa.go:45] found service account: "default"
	I0722 03:39:40.127316    2324 default_sa.go:55] duration metric: took 198.482125ms for default service account to be created ...
	I0722 03:39:40.127320    2324 system_pods.go:116] waiting for k8s-apps to be running ...
	I0722 03:39:40.329207    2324 system_pods.go:86] 7 kube-system pods found
	I0722 03:39:40.329214    2324 system_pods.go:89] "coredns-7db6d8ff4d-pt7q4" [8bf37bef-29e4-4c67-a7b7-77c015e7763b] Running
	I0722 03:39:40.329217    2324 system_pods.go:89] "etcd-functional-753000" [468471de-1030-42ed-bbf4-57f6c0dae6bd] Running
	I0722 03:39:40.329220    2324 system_pods.go:89] "kube-apiserver-functional-753000" [26f03245-1f14-4c59-b69c-5dbd093ca602] Running
	I0722 03:39:40.329221    2324 system_pods.go:89] "kube-controller-manager-functional-753000" [6f16e5fc-45c2-44b4-8b9f-72a79431e530] Running
	I0722 03:39:40.329222    2324 system_pods.go:89] "kube-proxy-s89cr" [b8581931-2d55-4193-83d8-22133887efb3] Running
	I0722 03:39:40.329223    2324 system_pods.go:89] "kube-scheduler-functional-753000" [8786b959-f435-4eca-b362-c2568e32d971] Running
	I0722 03:39:40.329224    2324 system_pods.go:89] "storage-provisioner" [bcabe724-4a68-4e41-b621-987f25a3ca6b] Running
	I0722 03:39:40.329227    2324 system_pods.go:126] duration metric: took 201.905541ms to wait for k8s-apps to be running ...
	I0722 03:39:40.329230    2324 system_svc.go:44] waiting for kubelet service to be running ....
	I0722 03:39:40.329306    2324 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 03:39:40.335187    2324 system_svc.go:56] duration metric: took 5.950541ms WaitForService to wait for kubelet
	I0722 03:39:40.335195    2324 kubeadm.go:582] duration metric: took 3.087535208s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0722 03:39:40.335204    2324 node_conditions.go:102] verifying NodePressure condition ...
	I0722 03:39:40.527747    2324 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0722 03:39:40.527752    2324 node_conditions.go:123] node cpu capacity is 2
	I0722 03:39:40.527757    2324 node_conditions.go:105] duration metric: took 192.550916ms to run NodePressure ...
	I0722 03:39:40.527762    2324 start.go:241] waiting for startup goroutines ...
	I0722 03:39:40.527766    2324 start.go:246] waiting for cluster config update ...
	I0722 03:39:40.527771    2324 start.go:255] writing updated cluster config ...
	I0722 03:39:40.528201    2324 ssh_runner.go:195] Run: rm -f paused
	I0722 03:39:40.558360    2324 start.go:600] kubectl: 1.29.2, cluster: 1.30.3 (minor skew: 1)
	I0722 03:39:40.562195    2324 out.go:177] * Done! kubectl is now configured to use "functional-753000" cluster and "default" namespace by default
	
	
	==> Docker <==
	Jul 22 10:40:18 functional-753000 dockerd[6211]: time="2024-07-22T10:40:18.438569859Z" level=warning msg="cleaning up after shim disconnected" id=7f92d10eae20272bc577206d0d752a72425a2467e6a666f465dbfd307d97cdf0 namespace=moby
	Jul 22 10:40:18 functional-753000 dockerd[6211]: time="2024-07-22T10:40:18.438585650Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 22 10:40:19 functional-753000 dockerd[6211]: time="2024-07-22T10:40:19.007775694Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 22 10:40:19 functional-753000 dockerd[6211]: time="2024-07-22T10:40:19.007806984Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 22 10:40:19 functional-753000 dockerd[6211]: time="2024-07-22T10:40:19.007820900Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 22 10:40:19 functional-753000 dockerd[6211]: time="2024-07-22T10:40:19.007855064Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 22 10:40:19 functional-753000 dockerd[6204]: time="2024-07-22T10:40:19.028846720Z" level=info msg="ignoring event" container=c2836a21d7467715b623ba6959f48de209fcae60028ff3445bde9443dde56abc module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 22 10:40:19 functional-753000 dockerd[6211]: time="2024-07-22T10:40:19.029057707Z" level=info msg="shim disconnected" id=c2836a21d7467715b623ba6959f48de209fcae60028ff3445bde9443dde56abc namespace=moby
	Jul 22 10:40:19 functional-753000 dockerd[6211]: time="2024-07-22T10:40:19.029085289Z" level=warning msg="cleaning up after shim disconnected" id=c2836a21d7467715b623ba6959f48de209fcae60028ff3445bde9443dde56abc namespace=moby
	Jul 22 10:40:19 functional-753000 dockerd[6211]: time="2024-07-22T10:40:19.029089206Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 22 10:40:19 functional-753000 dockerd[6211]: time="2024-07-22T10:40:19.854861351Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 22 10:40:19 functional-753000 dockerd[6211]: time="2024-07-22T10:40:19.854909890Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 22 10:40:19 functional-753000 dockerd[6211]: time="2024-07-22T10:40:19.855051881Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 22 10:40:19 functional-753000 dockerd[6211]: time="2024-07-22T10:40:19.855108711Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 22 10:40:19 functional-753000 cri-dockerd[6474]: time="2024-07-22T10:40:19Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/c089ccd2be47874e2f4ebecc1d58561cf1dfe69bc1c0df31163c72c5071914c5/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jul 22 10:40:20 functional-753000 cri-dockerd[6474]: time="2024-07-22T10:40:20Z" level=info msg="Stop pulling image docker.io/nginx:latest: Status: Image is up to date for nginx:latest"
	Jul 22 10:40:20 functional-753000 dockerd[6211]: time="2024-07-22T10:40:20.639389746Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 22 10:40:20 functional-753000 dockerd[6211]: time="2024-07-22T10:40:20.639519780Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 22 10:40:20 functional-753000 dockerd[6211]: time="2024-07-22T10:40:20.639548320Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 22 10:40:20 functional-753000 dockerd[6211]: time="2024-07-22T10:40:20.639609899Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 22 10:40:28 functional-753000 dockerd[6211]: time="2024-07-22T10:40:28.137449040Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 22 10:40:28 functional-753000 dockerd[6211]: time="2024-07-22T10:40:28.137482121Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 22 10:40:28 functional-753000 dockerd[6211]: time="2024-07-22T10:40:28.137622696Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 22 10:40:28 functional-753000 dockerd[6211]: time="2024-07-22T10:40:28.137675818Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 22 10:40:28 functional-753000 cri-dockerd[6474]: time="2024-07-22T10:40:28Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/4232c74eb14fdde6f3a89fd47ae0d858b59e8d7e0ab34f9377dbc1988ebb61cb/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                           CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	a79666342a05b       nginx@sha256:67682bda769fae1ccf5183192b8daf37b64cae99c6c3302650f6f8bf5f0f95df   9 seconds ago        Running             myfrontend                0                   c089ccd2be478       sp-pod
	c2836a21d7467       72565bf5bbedf                                                                   11 seconds ago       Exited              echoserver-arm            2                   32cc9624bb79f       hello-node-connect-6f49f58cd5-bgvlc
	dba29dc3fada5       72565bf5bbedf                                                                   22 seconds ago       Exited              echoserver-arm            2                   77baf7a88bffb       hello-node-65f5d5cc78-sh8vd
	cfafd10677215       nginx@sha256:a45ee5d042aaa9e81e013f97ae40c3dda26fbe98f22b6251acdf28e579560d55   32 seconds ago       Running             nginx                     0                   fabd9a42040b2       nginx-svc
	7b411d9ce98b7       2437cf7621777                                                                   About a minute ago   Running             coredns                   2                   23e5c3ca421e3       coredns-7db6d8ff4d-pt7q4
	be6bfa4f5276d       ba04bb24b9575                                                                   About a minute ago   Running             storage-provisioner       3                   5191d222326e6       storage-provisioner
	904861af44477       2351f570ed0ea                                                                   About a minute ago   Running             kube-proxy                2                   f5cf57adc9a05       kube-proxy-s89cr
	f3dc080974909       8e97cdb19e7cc                                                                   About a minute ago   Running             kube-controller-manager   2                   2d1694fda8167       kube-controller-manager-functional-753000
	3b63a72a4bf39       d48f992a22722                                                                   About a minute ago   Running             kube-scheduler            2                   ece429ae3f658       kube-scheduler-functional-753000
	b5009029475f1       014faa467e297                                                                   About a minute ago   Running             etcd                      2                   5735ce4266b19       etcd-functional-753000
	20683f5efdc5e       61773190d42ff                                                                   About a minute ago   Running             kube-apiserver            0                   002c8fbf0184f       kube-apiserver-functional-753000
	a5d96268847fb       ba04bb24b9575                                                                   2 minutes ago        Exited              storage-provisioner       2                   60e9d6f81ad2e       storage-provisioner
	da7f10269a755       2437cf7621777                                                                   2 minutes ago        Exited              coredns                   1                   1e68721f5bf61       coredns-7db6d8ff4d-pt7q4
	39acd047c3855       2351f570ed0ea                                                                   2 minutes ago        Exited              kube-proxy                1                   1ad2aab651368       kube-proxy-s89cr
	676a3c0270794       8e97cdb19e7cc                                                                   2 minutes ago        Exited              kube-controller-manager   1                   24fce56917d44       kube-controller-manager-functional-753000
	7f6e321decca8       d48f992a22722                                                                   2 minutes ago        Exited              kube-scheduler            1                   2513219ff8513       kube-scheduler-functional-753000
	5c4aed695eddc       014faa467e297                                                                   2 minutes ago        Exited              etcd                      1                   8e648f87155bf       etcd-functional-753000
	
	
	==> coredns [7b411d9ce98b] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.11.1
	linux/arm64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:48991 - 25003 "HINFO IN 6016784612338794551.6607776839422765548. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.00985292s
	[INFO] 10.244.0.1:19898 - 54618 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 4096" NOERROR qr,aa,rd 104 0.000100619s
	[INFO] 10.244.0.1:14602 - 24479 "AAAA IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 146 0.000119243s
	[INFO] 10.244.0.1:40012 - 50125 "A IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 104 0.000032706s
	[INFO] 10.244.0.1:45647 - 5806 "SVCB IN _dns.resolver.arpa. udp 36 false 512" NXDOMAIN qr,rd,ra 116 0.001111638s
	[INFO] 10.244.0.1:60583 - 60052 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 104 0.000052455s
	[INFO] 10.244.0.1:62625 - 63226 "AAAA IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 146 0.000125867s
	
	
	==> coredns [da7f10269a75] <==
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.11.1
	linux/arm64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:37833 - 10343 "HINFO IN 7066754621908275338.8788857704144222136. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.008986695s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[260872791]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (22-Jul-2024 10:38:09.596) (total time: 30000ms):
	Trace[260872791]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (10:38:39.597)
	Trace[260872791]: [30.000348925s] [30.000348925s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[246760553]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (22-Jul-2024 10:38:09.596) (total time: 30000ms):
	Trace[246760553]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (10:38:39.597)
	Trace[246760553]: [30.000413544s] [30.000413544s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1657262547]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (22-Jul-2024 10:38:09.596) (total time: 30000ms):
	Trace[1657262547]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (10:38:39.597)
	Trace[1657262547]: [30.000415198s] [30.000415198s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-753000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-753000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8e5b1d22910d5d447b525af478862a848159d7b7
	                    minikube.k8s.io/name=functional-753000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_22T03_37_33_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 22 Jul 2024 10:37:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-753000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 22 Jul 2024 10:40:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 22 Jul 2024 10:40:25 +0000   Mon, 22 Jul 2024 10:37:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 22 Jul 2024 10:40:25 +0000   Mon, 22 Jul 2024 10:37:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 22 Jul 2024 10:40:25 +0000   Mon, 22 Jul 2024 10:37:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 22 Jul 2024 10:40:25 +0000   Mon, 22 Jul 2024 10:37:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.4
	  Hostname:    functional-753000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904740Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904740Ki
	  pods:               110
	System Info:
	  Machine ID:                 882c375d5ec34326bbfec8a9eb301a7f
	  System UUID:                882c375d5ec34326bbfec8a9eb301a7f
	  Boot ID:                    2f11582c-be08-4030-87c5-2636d5acd183
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://27.0.3
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox-mount                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     hello-node-65f5d5cc78-sh8vd                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         42s
	  default                     hello-node-connect-6f49f58cd5-bgvlc          0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         38s
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-7db6d8ff4d-pt7q4                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     2m42s
	  kube-system                 etcd-functional-753000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         2m57s
	  kube-system                 kube-apiserver-functional-753000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         64s
	  kube-system                 kube-controller-manager-functional-753000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m57s
	  kube-system                 kube-proxy-s89cr                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m42s
	  kube-system                 kube-scheduler-functional-753000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m57s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m41s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m41s                  kube-proxy       
	  Normal  Starting                 63s                    kube-proxy       
	  Normal  Starting                 2m19s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  2m57s                  kubelet          Node functional-753000 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  2m57s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    2m57s                  kubelet          Node functional-753000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m57s                  kubelet          Node functional-753000 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m57s                  kubelet          Starting kubelet.
	  Normal  NodeReady                2m53s                  kubelet          Node functional-753000 status is now: NodeReady
	  Normal  RegisteredNode           2m42s                  node-controller  Node functional-753000 event: Registered Node functional-753000 in Controller
	  Normal  Starting                 2m24s                  kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    2m23s (x8 over 2m23s)  kubelet          Node functional-753000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m23s (x8 over 2m23s)  kubelet          Node functional-753000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     2m23s (x7 over 2m23s)  kubelet          Node functional-753000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m23s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m9s                   node-controller  Node functional-753000 event: Registered Node functional-753000 in Controller
	  Normal  Starting                 68s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  67s (x8 over 67s)      kubelet          Node functional-753000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    67s (x8 over 67s)      kubelet          Node functional-753000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     67s (x7 over 67s)      kubelet          Node functional-753000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  67s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           52s                    node-controller  Node functional-753000 event: Registered Node functional-753000 in Controller
	
	
	==> dmesg <==
	[  +3.400395] kauditd_printk_skb: 199 callbacks suppressed
	[ +11.375358] kauditd_printk_skb: 32 callbacks suppressed
	[ +27.167135] systemd-fstab-generator[5271]: Ignoring "noauto" option for root device
	[Jul22 10:39] systemd-fstab-generator[5737]: Ignoring "noauto" option for root device
	[  +0.054828] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.100772] systemd-fstab-generator[5771]: Ignoring "noauto" option for root device
	[  +0.091475] systemd-fstab-generator[5783]: Ignoring "noauto" option for root device
	[  +0.098607] systemd-fstab-generator[5797]: Ignoring "noauto" option for root device
	[  +5.109963] kauditd_printk_skb: 89 callbacks suppressed
	[  +7.381410] systemd-fstab-generator[6427]: Ignoring "noauto" option for root device
	[  +0.091004] systemd-fstab-generator[6439]: Ignoring "noauto" option for root device
	[  +0.095890] systemd-fstab-generator[6451]: Ignoring "noauto" option for root device
	[  +0.084364] systemd-fstab-generator[6466]: Ignoring "noauto" option for root device
	[  +0.199201] systemd-fstab-generator[6628]: Ignoring "noauto" option for root device
	[  +1.045985] systemd-fstab-generator[6752]: Ignoring "noauto" option for root device
	[  +3.396939] kauditd_printk_skb: 199 callbacks suppressed
	[ +11.743519] kauditd_printk_skb: 31 callbacks suppressed
	[  +0.296086] systemd-fstab-generator[7749]: Ignoring "noauto" option for root device
	[  +4.738010] kauditd_printk_skb: 14 callbacks suppressed
	[  +5.509184] kauditd_printk_skb: 21 callbacks suppressed
	[  +6.839314] kauditd_printk_skb: 22 callbacks suppressed
	[Jul22 10:40] kauditd_printk_skb: 13 callbacks suppressed
	[  +5.136296] kauditd_printk_skb: 32 callbacks suppressed
	[ +10.345419] kauditd_printk_skb: 1 callbacks suppressed
	[  +9.781528] kauditd_printk_skb: 21 callbacks suppressed
	
	
	==> etcd [5c4aed695edd] <==
	{"level":"info","ts":"2024-07-22T10:38:06.686406Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-22T10:38:07.678174Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-22T10:38:07.678246Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-22T10:38:07.678268Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 2"}
	{"level":"info","ts":"2024-07-22T10:38:07.678293Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 3"}
	{"level":"info","ts":"2024-07-22T10:38:07.678314Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-07-22T10:38:07.67833Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 3"}
	{"level":"info","ts":"2024-07-22T10:38:07.678362Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-07-22T10:38:07.680126Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-753000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-22T10:38:07.680202Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-22T10:38:07.680379Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-22T10:38:07.680416Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-22T10:38:07.68044Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-22T10:38:07.68278Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-22T10:38:07.683054Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2024-07-22T10:39:07.96452Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-07-22T10:39:07.96455Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"functional-753000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	{"level":"warn","ts":"2024-07-22T10:39:07.964603Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-22T10:39:07.964646Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-22T10:39:07.977708Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-22T10:39:07.977733Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-22T10:39:07.97888Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"7520ddf439b1d16","current-leader-member-id":"7520ddf439b1d16"}
	{"level":"info","ts":"2024-07-22T10:39:07.980383Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-07-22T10:39:07.980418Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-07-22T10:39:07.980421Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"functional-753000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	
	
	==> etcd [b5009029475f] <==
	{"level":"info","ts":"2024-07-22T10:39:22.678037Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-22T10:39:22.678055Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-22T10:39:22.678944Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 switched to configuration voters=(527499358918876438)"}
	{"level":"info","ts":"2024-07-22T10:39:22.67899Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","added-peer-id":"7520ddf439b1d16","added-peer-peer-urls":["https://192.168.105.4:2380"]}
	{"level":"info","ts":"2024-07-22T10:39:22.679025Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-22T10:39:22.679211Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-22T10:39:22.679925Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-22T10:39:22.682435Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"7520ddf439b1d16","initial-advertise-peer-urls":["https://192.168.105.4:2380"],"listen-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.105.4:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-22T10:39:22.682537Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-22T10:39:22.682217Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-07-22T10:39:22.682577Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-07-22T10:39:24.368346Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 is starting a new election at term 3"}
	{"level":"info","ts":"2024-07-22T10:39:24.368545Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-07-22T10:39:24.368592Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-07-22T10:39:24.368624Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 4"}
	{"level":"info","ts":"2024-07-22T10:39:24.368641Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2024-07-22T10:39:24.368715Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 4"}
	{"level":"info","ts":"2024-07-22T10:39:24.368736Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2024-07-22T10:39:24.371001Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-22T10:39:24.371443Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-22T10:39:24.371689Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-22T10:39:24.371717Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-22T10:39:24.371004Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-753000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-22T10:39:24.376673Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2024-07-22T10:39:24.379791Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 10:40:29 up 3 min,  0 users,  load average: 0.41, 0.33, 0.14
	Linux functional-753000 5.10.207 #1 SMP PREEMPT Thu Jul 18 19:24:21 UTC 2024 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [20683f5efdc5] <==
	I0722 10:39:24.982644       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0722 10:39:24.982709       1 cache.go:39] Caches are synced for AvailableConditionController controller
	E0722 10:39:24.987450       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0722 10:39:24.996783       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0722 10:39:24.996800       1 aggregator.go:165] initial CRD sync complete...
	I0722 10:39:24.996805       1 autoregister_controller.go:141] Starting autoregister controller
	I0722 10:39:24.996808       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0722 10:39:24.996810       1 cache.go:39] Caches are synced for autoregister controller
	I0722 10:39:25.024299       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0722 10:39:25.024311       1 policy_source.go:224] refreshing policies
	I0722 10:39:25.024901       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0722 10:39:25.025479       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0722 10:39:25.879861       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0722 10:39:26.051026       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0722 10:39:26.054630       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0722 10:39:26.067103       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0722 10:39:26.074443       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0722 10:39:26.076389       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0722 10:39:37.021055       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0722 10:39:37.289776       1 controller.go:615] quota admission added evaluator for: endpoints
	I0722 10:39:42.053312       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.109.154.57"}
	I0722 10:39:47.519491       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0722 10:39:47.561243       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.100.8.167"}
	I0722 10:39:51.482735       1 alloc.go:330] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.109.72.251"}
	I0722 10:40:02.881681       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.102.108.140"}
	
	
	==> kube-controller-manager [676a3c027079] <==
	I0722 10:38:20.694617       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0722 10:38:20.695643       1 shared_informer.go:320] Caches are synced for crt configmap
	I0722 10:38:20.696781       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0722 10:38:20.696785       1 shared_informer.go:320] Caches are synced for endpoint
	I0722 10:38:20.699464       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0722 10:38:20.700969       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0722 10:38:20.707201       1 shared_informer.go:320] Caches are synced for stateful set
	I0722 10:38:20.708308       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0722 10:38:20.742992       1 shared_informer.go:320] Caches are synced for HPA
	I0722 10:38:20.744045       1 shared_informer.go:320] Caches are synced for persistent volume
	I0722 10:38:20.765071       1 shared_informer.go:320] Caches are synced for PV protection
	I0722 10:38:20.812734       1 shared_informer.go:320] Caches are synced for resource quota
	I0722 10:38:20.840477       1 shared_informer.go:320] Caches are synced for daemon sets
	I0722 10:38:20.859204       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0722 10:38:20.868169       1 shared_informer.go:320] Caches are synced for taint
	I0722 10:38:20.868224       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0722 10:38:20.868286       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-753000"
	I0722 10:38:20.868346       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0722 10:38:20.890984       1 shared_informer.go:320] Caches are synced for attach detach
	I0722 10:38:20.905955       1 shared_informer.go:320] Caches are synced for resource quota
	I0722 10:38:21.332893       1 shared_informer.go:320] Caches are synced for garbage collector
	I0722 10:38:21.398440       1 shared_informer.go:320] Caches are synced for garbage collector
	I0722 10:38:21.398497       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0722 10:38:47.187449       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="4.042084ms"
	I0722 10:38:47.187580       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="17.288µs"
	
	
	==> kube-controller-manager [f3dc08097490] <==
	I0722 10:39:37.121495       1 shared_informer.go:320] Caches are synced for resource quota
	I0722 10:39:37.134578       1 shared_informer.go:320] Caches are synced for HPA
	I0722 10:39:37.167498       1 shared_informer.go:320] Caches are synced for resource quota
	I0722 10:39:37.221840       1 shared_informer.go:320] Caches are synced for attach detach
	I0722 10:39:37.223052       1 shared_informer.go:320] Caches are synced for PV protection
	I0722 10:39:37.252904       1 shared_informer.go:320] Caches are synced for persistent volume
	I0722 10:39:37.675471       1 shared_informer.go:320] Caches are synced for garbage collector
	I0722 10:39:37.729797       1 shared_informer.go:320] Caches are synced for garbage collector
	I0722 10:39:37.729815       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0722 10:39:47.535265       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-65f5d5cc78" duration="13.240848ms"
	I0722 10:39:47.538596       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-65f5d5cc78" duration="3.281286ms"
	I0722 10:39:47.538625       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-65f5d5cc78" duration="11.625µs"
	I0722 10:39:47.542785       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-65f5d5cc78" duration="12.541µs"
	I0722 10:39:55.228247       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-65f5d5cc78" duration="20.082µs"
	I0722 10:39:56.235000       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-65f5d5cc78" duration="24.248µs"
	I0722 10:39:57.240925       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-65f5d5cc78" duration="30.915µs"
	I0722 10:40:02.848579       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-6f49f58cd5" duration="7.744932ms"
	I0722 10:40:02.856268       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-6f49f58cd5" duration="7.659187ms"
	I0722 10:40:02.856294       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-6f49f58cd5" duration="10.832µs"
	I0722 10:40:04.312705       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-6f49f58cd5" duration="16.79µs"
	I0722 10:40:05.316378       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-6f49f58cd5" duration="18.041µs"
	I0722 10:40:06.320997       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-6f49f58cd5" duration="23.207µs"
	I0722 10:40:08.332579       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-65f5d5cc78" duration="22.957µs"
	I0722 10:40:19.434528       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-6f49f58cd5" duration="24.374µs"
	I0722 10:40:19.992462       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-65f5d5cc78" duration="21.207µs"
	
	
	==> kube-proxy [39acd047c385] <==
	I0722 10:38:09.492574       1 server_linux.go:69] "Using iptables proxy"
	I0722 10:38:09.497608       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.105.4"]
	I0722 10:38:09.509425       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0722 10:38:09.509454       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0722 10:38:09.509464       1 server_linux.go:165] "Using iptables Proxier"
	I0722 10:38:09.510217       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0722 10:38:09.510287       1 server.go:872] "Version info" version="v1.30.3"
	I0722 10:38:09.510291       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0722 10:38:09.511941       1 config.go:192] "Starting service config controller"
	I0722 10:38:09.511949       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0722 10:38:09.511965       1 config.go:101] "Starting endpoint slice config controller"
	I0722 10:38:09.511967       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0722 10:38:09.519372       1 config.go:319] "Starting node config controller"
	I0722 10:38:09.519405       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0722 10:38:09.612239       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0722 10:38:09.612269       1 shared_informer.go:320] Caches are synced for service config
	I0722 10:38:09.619508       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [904861af4447] <==
	I0722 10:39:25.610320       1 server_linux.go:69] "Using iptables proxy"
	I0722 10:39:25.614068       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.105.4"]
	I0722 10:39:25.621984       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0722 10:39:25.622003       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0722 10:39:25.622010       1 server_linux.go:165] "Using iptables Proxier"
	I0722 10:39:25.622723       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0722 10:39:25.622814       1 server.go:872] "Version info" version="v1.30.3"
	I0722 10:39:25.622822       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0722 10:39:25.623286       1 config.go:192] "Starting service config controller"
	I0722 10:39:25.623295       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0722 10:39:25.623319       1 config.go:101] "Starting endpoint slice config controller"
	I0722 10:39:25.623325       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0722 10:39:25.623533       1 config.go:319] "Starting node config controller"
	I0722 10:39:25.623687       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0722 10:39:25.724299       1 shared_informer.go:320] Caches are synced for service config
	I0722 10:39:25.724301       1 shared_informer.go:320] Caches are synced for node config
	I0722 10:39:25.724312       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [3b63a72a4bf3] <==
	I0722 10:39:22.834450       1 serving.go:380] Generated self-signed cert in-memory
	W0722 10:39:24.903579       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0722 10:39:24.903596       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0722 10:39:24.903600       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0722 10:39:24.903604       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0722 10:39:24.930983       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0722 10:39:24.931031       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0722 10:39:24.931793       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0722 10:39:24.932452       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0722 10:39:24.932470       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0722 10:39:24.934349       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0722 10:39:25.034989       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [7f6e321decca] <==
	I0722 10:38:07.032622       1 serving.go:380] Generated self-signed cert in-memory
	W0722 10:38:08.243447       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0722 10:38:08.243487       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0722 10:38:08.243519       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0722 10:38:08.243526       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0722 10:38:08.264286       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0722 10:38:08.264302       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0722 10:38:08.265097       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0722 10:38:08.265266       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0722 10:38:08.265275       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0722 10:38:08.265283       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0722 10:38:08.366274       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0722 10:39:07.963940       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Jul 22 10:40:18 functional-753000 kubelet[6759]: I0722 10:40:18.654436    6759 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-72k5k\" (UniqueName: \"kubernetes.io/projected/c44e1a4d-a410-4584-9858-da67c1c28674-kube-api-access-72k5k\") on node \"functional-753000\" DevicePath \"\""
	Jul 22 10:40:18 functional-753000 kubelet[6759]: I0722 10:40:18.654452    6759 reconciler_common.go:289] "Volume detached for volume \"pvc-79fc6a20-d394-4eed-b508-1ec91ac36d6e\" (UniqueName: \"kubernetes.io/host-path/c44e1a4d-a410-4584-9858-da67c1c28674-pvc-79fc6a20-d394-4eed-b508-1ec91ac36d6e\") on node \"functional-753000\" DevicePath \"\""
	Jul 22 10:40:18 functional-753000 kubelet[6759]: I0722 10:40:18.987555    6759 scope.go:117] "RemoveContainer" containerID="ef762ab68626c646398374a2622639c660a8efd3a2dd03041726d5e64814f491"
	Jul 22 10:40:19 functional-753000 kubelet[6759]: I0722 10:40:19.428611    6759 scope.go:117] "RemoveContainer" containerID="ef762ab68626c646398374a2622639c660a8efd3a2dd03041726d5e64814f491"
	Jul 22 10:40:19 functional-753000 kubelet[6759]: I0722 10:40:19.428755    6759 scope.go:117] "RemoveContainer" containerID="c2836a21d7467715b623ba6959f48de209fcae60028ff3445bde9443dde56abc"
	Jul 22 10:40:19 functional-753000 kubelet[6759]: E0722 10:40:19.428839    6759 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-connect-6f49f58cd5-bgvlc_default(4bc78060-362a-46b3-a035-ec5f1389e976)\"" pod="default/hello-node-connect-6f49f58cd5-bgvlc" podUID="4bc78060-362a-46b3-a035-ec5f1389e976"
	Jul 22 10:40:19 functional-753000 kubelet[6759]: I0722 10:40:19.438752    6759 scope.go:117] "RemoveContainer" containerID="e30c66b52a87a9c9264eb44a4a6af160b52cd07bd81c467e123ba5c0b6c6dab3"
	Jul 22 10:40:19 functional-753000 kubelet[6759]: I0722 10:40:19.509042    6759 topology_manager.go:215] "Topology Admit Handler" podUID="860fcd9a-cde3-42f5-adda-4a09324b6e0b" podNamespace="default" podName="sp-pod"
	Jul 22 10:40:19 functional-753000 kubelet[6759]: E0722 10:40:19.509078    6759 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c44e1a4d-a410-4584-9858-da67c1c28674" containerName="myfrontend"
	Jul 22 10:40:19 functional-753000 kubelet[6759]: I0722 10:40:19.509096    6759 memory_manager.go:354] "RemoveStaleState removing state" podUID="c44e1a4d-a410-4584-9858-da67c1c28674" containerName="myfrontend"
	Jul 22 10:40:19 functional-753000 kubelet[6759]: I0722 10:40:19.561545    6759 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-79fc6a20-d394-4eed-b508-1ec91ac36d6e\" (UniqueName: \"kubernetes.io/host-path/860fcd9a-cde3-42f5-adda-4a09324b6e0b-pvc-79fc6a20-d394-4eed-b508-1ec91ac36d6e\") pod \"sp-pod\" (UID: \"860fcd9a-cde3-42f5-adda-4a09324b6e0b\") " pod="default/sp-pod"
	Jul 22 10:40:19 functional-753000 kubelet[6759]: I0722 10:40:19.561568    6759 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jc89k\" (UniqueName: \"kubernetes.io/projected/860fcd9a-cde3-42f5-adda-4a09324b6e0b-kube-api-access-jc89k\") pod \"sp-pod\" (UID: \"860fcd9a-cde3-42f5-adda-4a09324b6e0b\") " pod="default/sp-pod"
	Jul 22 10:40:19 functional-753000 kubelet[6759]: I0722 10:40:19.987374    6759 scope.go:117] "RemoveContainer" containerID="dba29dc3fada5f8567973e17ce7ce2e6c55cb807bfd4e8c072b53e618b653694"
	Jul 22 10:40:19 functional-753000 kubelet[6759]: E0722 10:40:19.987463    6759 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-65f5d5cc78-sh8vd_default(0f82c872-47e4-4df3-8a25-f7ba42944f87)\"" pod="default/hello-node-65f5d5cc78-sh8vd" podUID="0f82c872-47e4-4df3-8a25-f7ba42944f87"
	Jul 22 10:40:19 functional-753000 kubelet[6759]: I0722 10:40:19.991656    6759 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c44e1a4d-a410-4584-9858-da67c1c28674" path="/var/lib/kubelet/pods/c44e1a4d-a410-4584-9858-da67c1c28674/volumes"
	Jul 22 10:40:21 functional-753000 kubelet[6759]: E0722 10:40:21.992457    6759 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 22 10:40:21 functional-753000 kubelet[6759]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 22 10:40:21 functional-753000 kubelet[6759]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 22 10:40:21 functional-753000 kubelet[6759]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 22 10:40:21 functional-753000 kubelet[6759]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 22 10:40:22 functional-753000 kubelet[6759]: I0722 10:40:22.067260    6759 scope.go:117] "RemoveContainer" containerID="67a97afb49f2ff9bd00685b77c331b7b767357a7452bce23777c56ad61ceeec0"
	Jul 22 10:40:27 functional-753000 kubelet[6759]: I0722 10:40:27.810235    6759 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/sp-pod" podStartSLOduration=8.103156382 podStartE2EDuration="8.810224011s" podCreationTimestamp="2024-07-22 10:40:19 +0000 UTC" firstStartedPulling="2024-07-22 10:40:19.908782237 +0000 UTC m=+57.976968706" lastFinishedPulling="2024-07-22 10:40:20.615849824 +0000 UTC m=+58.684036335" observedRunningTime="2024-07-22 10:40:21.458453666 +0000 UTC m=+59.526640176" watchObservedRunningTime="2024-07-22 10:40:27.810224011 +0000 UTC m=+65.878410522"
	Jul 22 10:40:27 functional-753000 kubelet[6759]: I0722 10:40:27.810397    6759 topology_manager.go:215] "Topology Admit Handler" podUID="5ffab46e-18e9-4276-b08f-1df8fb46981e" podNamespace="default" podName="busybox-mount"
	Jul 22 10:40:27 functional-753000 kubelet[6759]: I0722 10:40:27.901670    6759 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bk4zt\" (UniqueName: \"kubernetes.io/projected/5ffab46e-18e9-4276-b08f-1df8fb46981e-kube-api-access-bk4zt\") pod \"busybox-mount\" (UID: \"5ffab46e-18e9-4276-b08f-1df8fb46981e\") " pod="default/busybox-mount"
	Jul 22 10:40:27 functional-753000 kubelet[6759]: I0722 10:40:27.901691    6759 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/5ffab46e-18e9-4276-b08f-1df8fb46981e-test-volume\") pod \"busybox-mount\" (UID: \"5ffab46e-18e9-4276-b08f-1df8fb46981e\") " pod="default/busybox-mount"
	
	
	==> storage-provisioner [a5d96268847f] <==
	I0722 10:38:25.045741       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0722 10:38:25.049954       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0722 10:38:25.049972       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0722 10:38:42.434909       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0722 10:38:42.434981       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-753000_06b35b81-da74-4c2d-a021-1b71fef352bc!
	I0722 10:38:42.435202       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"dc8ac273-21e4-4edf-9c6d-09e738dd90d5", APIVersion:"v1", ResourceVersion:"503", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-753000_06b35b81-da74-4c2d-a021-1b71fef352bc became leader
	I0722 10:38:42.535279       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-753000_06b35b81-da74-4c2d-a021-1b71fef352bc!
	
	
	==> storage-provisioner [be6bfa4f5276] <==
	I0722 10:39:25.570023       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0722 10:39:25.575451       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0722 10:39:25.575466       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0722 10:39:42.960938       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0722 10:39:42.961120       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"dc8ac273-21e4-4edf-9c6d-09e738dd90d5", APIVersion:"v1", ResourceVersion:"627", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-753000_481ecb2a-e407-42a1-968c-0f63b540ceba became leader
	I0722 10:39:42.961144       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-753000_481ecb2a-e407-42a1-968c-0f63b540ceba!
	I0722 10:39:43.062850       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-753000_481ecb2a-e407-42a1-968c-0f63b540ceba!
	I0722 10:40:07.110290       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I0722 10:40:07.110441       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    d663c357-c784-4147-a295-76b2758d51aa 345 0 2024-07-22 10:37:47 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2024-07-22 10:37:47 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-79fc6a20-d394-4eed-b508-1ec91ac36d6e &PersistentVolumeClaim{ObjectMeta:{myclaim  default  79fc6a20-d394-4eed-b508-1ec91ac36d6e 753 0 2024-07-22 10:40:07 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["Rea
dWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2024-07-22 10:40:07 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2024-07-22 10:40:07 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:
ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I0722 10:40:07.110950       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-79fc6a20-d394-4eed-b508-1ec91ac36d6e" provisioned
	I0722 10:40:07.110963       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I0722 10:40:07.110967       1 volume_store.go:212] Trying to save persistentvolume "pvc-79fc6a20-d394-4eed-b508-1ec91ac36d6e"
	I0722 10:40:07.111752       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"79fc6a20-d394-4eed-b508-1ec91ac36d6e", APIVersion:"v1", ResourceVersion:"753", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I0722 10:40:07.117651       1 volume_store.go:219] persistentvolume "pvc-79fc6a20-d394-4eed-b508-1ec91ac36d6e" saved
	I0722 10:40:07.118885       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"79fc6a20-d394-4eed-b508-1ec91ac36d6e", APIVersion:"v1", ResourceVersion:"753", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-79fc6a20-d394-4eed-b508-1ec91ac36d6e
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p functional-753000 -n functional-753000
helpers_test.go:261: (dbg) Run:  kubectl --context functional-753000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-753000 describe pod busybox-mount
helpers_test.go:282: (dbg) kubectl --context functional-753000 describe pod busybox-mount:

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-753000/192.168.105.4
	Start Time:       Mon, 22 Jul 2024 03:40:27 -0700
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  mount-munger:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bk4zt (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-bk4zt:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  2s    default-scheduler  Successfully assigned default/busybox-mount to functional-753000
	  Normal  Pulling    1s    kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (27.08s)
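The kubelet log above shows the echoserver-arm container behind hello-node-connect-6f49f58cd5-bgvlc going into CrashLoopBackOff ("back-off 20s restarting failed container"), which lines up with the service-connect failure. A minimal sketch for re-checking the endpoint by hand against the same profile (assuming functional-753000 is still running; the app=hello-node-connect label is an assumption based on how kubectl create deployment labels its pods):

	# inspect the backing pods and their restart counts (label selector is assumed)
	kubectl --context functional-753000 get pods -l app=hello-node-connect -o wide
	# fetch the crashed container's previous logs from one pod of the deployment
	kubectl --context functional-753000 logs deployment/hello-node-connect --previous
	# resolve the service URL the test exercises and probe it manually
	out/minikube-darwin-arm64 -p functional-753000 service hello-node-connect --url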

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (312.3s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-darwin-arm64 -p ha-248000 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-darwin-arm64 -p ha-248000 node stop m02 -v=7 --alsologtostderr: (12.173133958s)
ha_test.go:369: (dbg) Run:  out/minikube-darwin-arm64 -p ha-248000 status -v=7 --alsologtostderr
E0722 03:47:31.399253    1618 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/functional-753000/client.crt: no such file or directory
E0722 03:47:48.152060    1618 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/addons-974000/client.crt: no such file or directory
E0722 03:49:47.537890    1618 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/functional-753000/client.crt: no such file or directory
ha_test.go:369: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-248000 status -v=7 --alsologtostderr: exit status 7 (3m45.044856s)

                                                
                                                
-- stdout --
	ha-248000
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-248000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-248000-m03
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-248000-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0722 03:46:28.234420    2955 out.go:291] Setting OutFile to fd 1 ...
	I0722 03:46:28.234589    2955 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 03:46:28.234594    2955 out.go:304] Setting ErrFile to fd 2...
	I0722 03:46:28.234596    2955 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 03:46:28.234723    2955 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19313-1127/.minikube/bin
	I0722 03:46:28.234872    2955 out.go:298] Setting JSON to false
	I0722 03:46:28.234885    2955 mustload.go:65] Loading cluster: ha-248000
	I0722 03:46:28.234958    2955 notify.go:220] Checking for updates...
	I0722 03:46:28.235081    2955 config.go:182] Loaded profile config "ha-248000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0722 03:46:28.235089    2955 status.go:255] checking status of ha-248000 ...
	I0722 03:46:28.235879    2955 status.go:330] ha-248000 host status = "Running" (err=<nil>)
	I0722 03:46:28.235891    2955 host.go:66] Checking if "ha-248000" exists ...
	I0722 03:46:28.235988    2955 host.go:66] Checking if "ha-248000" exists ...
	I0722 03:46:28.236103    2955 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0722 03:46:28.236112    2955 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19313-1127/.minikube/machines/ha-248000/id_rsa Username:docker}
	W0722 03:47:43.237956    2955 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: operation timed out
	W0722 03:47:43.238050    2955 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0722 03:47:43.238059    2955 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0722 03:47:43.238063    2955 status.go:257] ha-248000 status: &{Name:ha-248000 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0722 03:47:43.238074    2955 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0722 03:47:43.238081    2955 status.go:255] checking status of ha-248000-m02 ...
	I0722 03:47:43.238314    2955 status.go:330] ha-248000-m02 host status = "Stopped" (err=<nil>)
	I0722 03:47:43.238320    2955 status.go:343] host is not running, skipping remaining checks
	I0722 03:47:43.238322    2955 status.go:257] ha-248000-m02 status: &{Name:ha-248000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0722 03:47:43.238327    2955 status.go:255] checking status of ha-248000-m03 ...
	I0722 03:47:43.238933    2955 status.go:330] ha-248000-m03 host status = "Running" (err=<nil>)
	I0722 03:47:43.238939    2955 host.go:66] Checking if "ha-248000-m03" exists ...
	I0722 03:47:43.239051    2955 host.go:66] Checking if "ha-248000-m03" exists ...
	I0722 03:47:43.239189    2955 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0722 03:47:43.239195    2955 sshutil.go:53] new ssh client: &{IP:192.168.105.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19313-1127/.minikube/machines/ha-248000-m03/id_rsa Username:docker}
	W0722 03:48:58.240621    2955 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.7:22: connect: operation timed out
	W0722 03:48:58.240666    2955 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	E0722 03:48:58.240679    2955 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0722 03:48:58.240683    2955 status.go:257] ha-248000-m03 status: &{Name:ha-248000-m03 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0722 03:48:58.240692    2955 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0722 03:48:58.240697    2955 status.go:255] checking status of ha-248000-m04 ...
	I0722 03:48:58.241409    2955 status.go:330] ha-248000-m04 host status = "Running" (err=<nil>)
	I0722 03:48:58.241417    2955 host.go:66] Checking if "ha-248000-m04" exists ...
	I0722 03:48:58.241533    2955 host.go:66] Checking if "ha-248000-m04" exists ...
	I0722 03:48:58.241652    2955 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0722 03:48:58.241658    2955 sshutil.go:53] new ssh client: &{IP:192.168.105.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19313-1127/.minikube/machines/ha-248000-m04/id_rsa Username:docker}
	W0722 03:50:13.243557    2955 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.8:22: connect: operation timed out
	W0722 03:50:13.243745    2955 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	E0722 03:50:13.243791    2955 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	I0722 03:50:13.243814    2955 status.go:257] ha-248000-m04 status: &{Name:ha-248000-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0722 03:50:13.243854    2955 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out

                                                
                                                
** /stderr **
ha_test.go:378: status says not three hosts are running: args "out/minikube-darwin-arm64 -p ha-248000 status -v=7 --alsologtostderr": ha-248000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

                                                
                                                
ha-248000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-248000-m03
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

                                                
                                                
ha-248000-m04
type: Worker
host: Error
kubelet: Nonexistent

                                                
                                                
ha_test.go:381: status says not three kubelets are running: args "out/minikube-darwin-arm64 -p ha-248000 status -v=7 --alsologtostderr": ha-248000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

                                                
                                                
ha-248000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-248000-m03
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

                                                
                                                
ha-248000-m04
type: Worker
host: Error
kubelet: Nonexistent

                                                
                                                
ha_test.go:384: status says not two apiservers are running: args "out/minikube-darwin-arm64 -p ha-248000 status -v=7 --alsologtostderr": ha-248000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

                                                
                                                
ha-248000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-248000-m03
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

                                                
                                                
ha-248000-m04
type: Worker
host: Error
kubelet: Nonexistent

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-248000 -n ha-248000
E0722 03:50:15.241228    1618 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/functional-753000/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-248000 -n ha-248000: exit status 3 (1m15.078799834s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0722 03:51:28.323733    2976 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0722 03:51:28.323790    2976 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-248000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (312.30s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (225.14s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
E0722 03:52:48.152122    1618 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/addons-974000/client.crt: no such file or directory
ha_test.go:390: (dbg) Done: out/minikube-darwin-arm64 profile list --output json: (2m30.0985415s)
ha_test.go:413: expected profile "ha-248000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-248000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-248000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-248000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"K
ubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\
":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docke
r\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-248000 -n ha-248000
E0722 03:54:11.221954    1618 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/addons-974000/client.crt: no such file or directory
E0722 03:54:47.537224    1618 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/functional-753000/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-248000 -n ha-248000: exit status 3 (1m15.042170667s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0722 03:55:13.463083    3012 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0722 03:55:13.463126    3012 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-248000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (225.14s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (305.25s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-darwin-arm64 -p ha-248000 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-248000 node start m02 -v=7 --alsologtostderr: exit status 80 (5.139786167s)

                                                
                                                
-- stdout --
	* Starting "ha-248000-m02" control-plane node in "ha-248000" cluster
	* Restarting existing qemu2 VM for "ha-248000-m02" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-248000-m02" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0722 03:55:13.539251    3026 out.go:291] Setting OutFile to fd 1 ...
	I0722 03:55:13.539748    3026 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 03:55:13.539754    3026 out.go:304] Setting ErrFile to fd 2...
	I0722 03:55:13.539758    3026 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 03:55:13.539964    3026 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19313-1127/.minikube/bin
	I0722 03:55:13.540283    3026 mustload.go:65] Loading cluster: ha-248000
	I0722 03:55:13.540586    3026 config.go:182] Loaded profile config "ha-248000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	W0722 03:55:13.540909    3026 host.go:58] "ha-248000-m02" host status: Stopped
	I0722 03:55:13.545238    3026 out.go:177] * Starting "ha-248000-m02" control-plane node in "ha-248000" cluster
	I0722 03:55:13.548130    3026 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0722 03:55:13.548150    3026 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0722 03:55:13.548166    3026 cache.go:56] Caching tarball of preloaded images
	I0722 03:55:13.548321    3026 preload.go:172] Found /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0722 03:55:13.548368    3026 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0722 03:55:13.548478    3026 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/ha-248000/config.json ...
	I0722 03:55:13.551156    3026 start.go:360] acquireMachinesLock for ha-248000-m02: {Name:mkd413881e612ea8d9ddb0175c22cca270cd2452 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0722 03:55:13.551214    3026 start.go:364] duration metric: took 38.334µs to acquireMachinesLock for "ha-248000-m02"
	I0722 03:55:13.551223    3026 start.go:96] Skipping create...Using existing machine configuration
	I0722 03:55:13.551229    3026 fix.go:54] fixHost starting: m02
	I0722 03:55:13.551365    3026 fix.go:112] recreateIfNeeded on ha-248000-m02: state=Stopped err=<nil>
	W0722 03:55:13.551372    3026 fix.go:138] unexpected machine state, will restart: <nil>
	I0722 03:55:13.556181    3026 out.go:177] * Restarting existing qemu2 VM for "ha-248000-m02" ...
	I0722 03:55:13.560122    3026 qemu.go:418] Using hvf for hardware acceleration
	I0722 03:55:13.560181    3026 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/ha-248000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19313-1127/.minikube/machines/ha-248000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/ha-248000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:75:a6:35:f1:b7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/ha-248000-m02/disk.qcow2
	I0722 03:55:13.563475    3026 main.go:141] libmachine: STDOUT: 
	I0722 03:55:13.563499    3026 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0722 03:55:13.563537    3026 fix.go:56] duration metric: took 12.308542ms for fixHost
	I0722 03:55:13.563541    3026 start.go:83] releasing machines lock for "ha-248000-m02", held for 12.322042ms
	W0722 03:55:13.563550    3026 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0722 03:55:13.563611    3026 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0722 03:55:13.563617    3026 start.go:729] Will try again in 5 seconds ...
	I0722 03:55:18.565742    3026 start.go:360] acquireMachinesLock for ha-248000-m02: {Name:mkd413881e612ea8d9ddb0175c22cca270cd2452 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0722 03:55:18.566145    3026 start.go:364] duration metric: took 303.875µs to acquireMachinesLock for "ha-248000-m02"
	I0722 03:55:18.566298    3026 start.go:96] Skipping create...Using existing machine configuration
	I0722 03:55:18.566316    3026 fix.go:54] fixHost starting: m02
	I0722 03:55:18.567014    3026 fix.go:112] recreateIfNeeded on ha-248000-m02: state=Stopped err=<nil>
	W0722 03:55:18.567041    3026 fix.go:138] unexpected machine state, will restart: <nil>
	I0722 03:55:18.571867    3026 out.go:177] * Restarting existing qemu2 VM for "ha-248000-m02" ...
	I0722 03:55:18.575959    3026 qemu.go:418] Using hvf for hardware acceleration
	I0722 03:55:18.576183    3026 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/ha-248000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19313-1127/.minikube/machines/ha-248000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/ha-248000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:75:a6:35:f1:b7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/ha-248000-m02/disk.qcow2
	I0722 03:55:18.583777    3026 main.go:141] libmachine: STDOUT: 
	I0722 03:55:18.583923    3026 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0722 03:55:18.584001    3026 fix.go:56] duration metric: took 17.686875ms for fixHost
	I0722 03:55:18.584014    3026 start.go:83] releasing machines lock for "ha-248000-m02", held for 17.831375ms
	W0722 03:55:18.584185    3026 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-248000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-248000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0722 03:55:18.588975    3026 out.go:177] 
	W0722 03:55:18.591987    3026 out.go:239] X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0722 03:55:18.592027    3026 out.go:239] * 
	* 
	W0722 03:55:18.598878    3026 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0722 03:55:18.603939    3026 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:422: I0722 03:55:13.539251    3026 out.go:291] Setting OutFile to fd 1 ...
I0722 03:55:13.539748    3026 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0722 03:55:13.539754    3026 out.go:304] Setting ErrFile to fd 2...
I0722 03:55:13.539758    3026 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0722 03:55:13.539964    3026 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19313-1127/.minikube/bin
I0722 03:55:13.540283    3026 mustload.go:65] Loading cluster: ha-248000
I0722 03:55:13.540586    3026 config.go:182] Loaded profile config "ha-248000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
W0722 03:55:13.540909    3026 host.go:58] "ha-248000-m02" host status: Stopped
I0722 03:55:13.545238    3026 out.go:177] * Starting "ha-248000-m02" control-plane node in "ha-248000" cluster
I0722 03:55:13.548130    3026 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
I0722 03:55:13.548150    3026 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
I0722 03:55:13.548166    3026 cache.go:56] Caching tarball of preloaded images
I0722 03:55:13.548321    3026 preload.go:172] Found /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0722 03:55:13.548368    3026 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
I0722 03:55:13.548478    3026 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/ha-248000/config.json ...
I0722 03:55:13.551156    3026 start.go:360] acquireMachinesLock for ha-248000-m02: {Name:mkd413881e612ea8d9ddb0175c22cca270cd2452 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0722 03:55:13.551214    3026 start.go:364] duration metric: took 38.334µs to acquireMachinesLock for "ha-248000-m02"
I0722 03:55:13.551223    3026 start.go:96] Skipping create...Using existing machine configuration
I0722 03:55:13.551229    3026 fix.go:54] fixHost starting: m02
I0722 03:55:13.551365    3026 fix.go:112] recreateIfNeeded on ha-248000-m02: state=Stopped err=<nil>
W0722 03:55:13.551372    3026 fix.go:138] unexpected machine state, will restart: <nil>
I0722 03:55:13.556181    3026 out.go:177] * Restarting existing qemu2 VM for "ha-248000-m02" ...
I0722 03:55:13.560122    3026 qemu.go:418] Using hvf for hardware acceleration
I0722 03:55:13.560181    3026 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/ha-248000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19313-1127/.minikube/machines/ha-248000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/ha-248000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:75:a6:35:f1:b7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/ha-248000-m02/disk.qcow2
I0722 03:55:13.563475    3026 main.go:141] libmachine: STDOUT: 
I0722 03:55:13.563499    3026 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

                                                
                                                
I0722 03:55:13.563537    3026 fix.go:56] duration metric: took 12.308542ms for fixHost
I0722 03:55:13.563541    3026 start.go:83] releasing machines lock for "ha-248000-m02", held for 12.322042ms
W0722 03:55:13.563550    3026 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0722 03:55:13.563611    3026 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0722 03:55:13.563617    3026 start.go:729] Will try again in 5 seconds ...
I0722 03:55:18.565742    3026 start.go:360] acquireMachinesLock for ha-248000-m02: {Name:mkd413881e612ea8d9ddb0175c22cca270cd2452 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0722 03:55:18.566145    3026 start.go:364] duration metric: took 303.875µs to acquireMachinesLock for "ha-248000-m02"
I0722 03:55:18.566298    3026 start.go:96] Skipping create...Using existing machine configuration
I0722 03:55:18.566316    3026 fix.go:54] fixHost starting: m02
I0722 03:55:18.567014    3026 fix.go:112] recreateIfNeeded on ha-248000-m02: state=Stopped err=<nil>
W0722 03:55:18.567041    3026 fix.go:138] unexpected machine state, will restart: <nil>
I0722 03:55:18.571867    3026 out.go:177] * Restarting existing qemu2 VM for "ha-248000-m02" ...
I0722 03:55:18.575959    3026 qemu.go:418] Using hvf for hardware acceleration
I0722 03:55:18.576183    3026 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/ha-248000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19313-1127/.minikube/machines/ha-248000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/ha-248000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:75:a6:35:f1:b7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/ha-248000-m02/disk.qcow2
I0722 03:55:18.583777    3026 main.go:141] libmachine: STDOUT: 
I0722 03:55:18.583923    3026 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

                                                
                                                
I0722 03:55:18.584001    3026 fix.go:56] duration metric: took 17.686875ms for fixHost
I0722 03:55:18.584014    3026 start.go:83] releasing machines lock for "ha-248000-m02", held for 17.831375ms
W0722 03:55:18.584185    3026 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-248000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* Failed to start qemu2 VM. Running "minikube delete -p ha-248000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0722 03:55:18.588975    3026 out.go:177] 
W0722 03:55:18.591987    3026 out.go:239] X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0722 03:55:18.592027    3026 out.go:239] * 
* 
W0722 03:55:18.598878    3026 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0722 03:55:18.603939    3026 out.go:177] 
ha_test.go:423: secondary control-plane node start returned an error. args "out/minikube-darwin-arm64 -p ha-248000 node start m02 -v=7 --alsologtostderr": exit status 80
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-248000 status -v=7 --alsologtostderr
E0722 03:57:48.114189    1618 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/addons-974000/client.crt: no such file or directory
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-248000 status -v=7 --alsologtostderr: exit status 7 (3m45.069140833s)

                                                
                                                
-- stdout --
	ha-248000
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-248000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-248000-m03
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-248000-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0722 03:55:18.662145    3032 out.go:291] Setting OutFile to fd 1 ...
	I0722 03:55:18.662324    3032 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 03:55:18.662332    3032 out.go:304] Setting ErrFile to fd 2...
	I0722 03:55:18.662335    3032 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 03:55:18.662505    3032 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19313-1127/.minikube/bin
	I0722 03:55:18.662652    3032 out.go:298] Setting JSON to false
	I0722 03:55:18.662671    3032 mustload.go:65] Loading cluster: ha-248000
	I0722 03:55:18.662709    3032 notify.go:220] Checking for updates...
	I0722 03:55:18.662928    3032 config.go:182] Loaded profile config "ha-248000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0722 03:55:18.662938    3032 status.go:255] checking status of ha-248000 ...
	I0722 03:55:18.663785    3032 status.go:330] ha-248000 host status = "Running" (err=<nil>)
	I0722 03:55:18.663798    3032 host.go:66] Checking if "ha-248000" exists ...
	I0722 03:55:18.663936    3032 host.go:66] Checking if "ha-248000" exists ...
	I0722 03:55:18.664081    3032 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0722 03:55:18.664090    3032 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19313-1127/.minikube/machines/ha-248000/id_rsa Username:docker}
	W0722 03:56:33.666509    3032 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: operation timed out
	W0722 03:56:33.666931    3032 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0722 03:56:33.666996    3032 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0722 03:56:33.667018    3032 status.go:257] ha-248000 status: &{Name:ha-248000 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0722 03:56:33.667086    3032 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0722 03:56:33.667109    3032 status.go:255] checking status of ha-248000-m02 ...
	I0722 03:56:33.668091    3032 status.go:330] ha-248000-m02 host status = "Stopped" (err=<nil>)
	I0722 03:56:33.668112    3032 status.go:343] host is not running, skipping remaining checks
	I0722 03:56:33.668125    3032 status.go:257] ha-248000-m02 status: &{Name:ha-248000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0722 03:56:33.668148    3032 status.go:255] checking status of ha-248000-m03 ...
	I0722 03:56:33.670334    3032 status.go:330] ha-248000-m03 host status = "Running" (err=<nil>)
	I0722 03:56:33.670357    3032 host.go:66] Checking if "ha-248000-m03" exists ...
	I0722 03:56:33.670822    3032 host.go:66] Checking if "ha-248000-m03" exists ...
	I0722 03:56:33.671302    3032 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0722 03:56:33.671327    3032 sshutil.go:53] new ssh client: &{IP:192.168.105.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19313-1127/.minikube/machines/ha-248000-m03/id_rsa Username:docker}
	W0722 03:57:48.634169    3032 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.7:22: connect: operation timed out
	W0722 03:57:48.634360    3032 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	E0722 03:57:48.634401    3032 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0722 03:57:48.634420    3032 status.go:257] ha-248000-m03 status: &{Name:ha-248000-m03 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0722 03:57:48.634466    3032 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0722 03:57:48.634484    3032 status.go:255] checking status of ha-248000-m04 ...
	I0722 03:57:48.637403    3032 status.go:330] ha-248000-m04 host status = "Running" (err=<nil>)
	I0722 03:57:48.637431    3032 host.go:66] Checking if "ha-248000-m04" exists ...
	I0722 03:57:48.637974    3032 host.go:66] Checking if "ha-248000-m04" exists ...
	I0722 03:57:48.638637    3032 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0722 03:57:48.638662    3032 sshutil.go:53] new ssh client: &{IP:192.168.105.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19313-1127/.minikube/machines/ha-248000-m04/id_rsa Username:docker}
	W0722 03:59:03.639335    3032 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.8:22: connect: operation timed out
	W0722 03:59:03.639378    3032 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	E0722 03:59:03.639386    3032 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	I0722 03:59:03.639390    3032 status.go:257] ha-248000-m04 status: &{Name:ha-248000-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0722 03:59:03.639399    3032 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out

                                                
                                                
** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-248000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-248000 -n ha-248000
E0722 03:59:47.497171    1618 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/functional-753000/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-248000 -n ha-248000: exit status 3 (1m15.040340666s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0722 04:00:18.675093    3073 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0722 04:00:18.675122    3073 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-248000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (305.25s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (332.62s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-248000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-darwin-arm64 stop -p ha-248000 -v=7 --alsologtostderr
E0722 04:04:47.493512    1618 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/functional-753000/client.crt: no such file or directory
E0722 04:07:48.106074    1618 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/addons-974000/client.crt: no such file or directory
ha_test.go:462: (dbg) Done: out/minikube-darwin-arm64 stop -p ha-248000 -v=7 --alsologtostderr: (5m27.211748208s)
ha_test.go:467: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-248000 --wait=true -v=7 --alsologtostderr
ha_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-248000 --wait=true -v=7 --alsologtostderr: exit status 80 (5.250686542s)

                                                
                                                
-- stdout --
	* [ha-248000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19313
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19313-1127/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19313-1127/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-248000" primary control-plane node in "ha-248000" cluster
	* Restarting existing qemu2 VM for "ha-248000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-248000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0722 04:08:16.095111    3518 out.go:291] Setting OutFile to fd 1 ...
	I0722 04:08:16.095310    3518 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:08:16.095315    3518 out.go:304] Setting ErrFile to fd 2...
	I0722 04:08:16.095318    3518 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:08:16.095509    3518 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19313-1127/.minikube/bin
	I0722 04:08:16.096755    3518 out.go:298] Setting JSON to false
	I0722 04:08:16.117723    3518 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4065,"bootTime":1721642431,"procs":454,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0722 04:08:16.117808    3518 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0722 04:08:16.123668    3518 out.go:177] * [ha-248000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0722 04:08:16.130666    3518 notify.go:220] Checking for updates...
	I0722 04:08:16.135681    3518 out.go:177]   - MINIKUBE_LOCATION=19313
	I0722 04:08:16.139620    3518 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19313-1127/kubeconfig
	I0722 04:08:16.144882    3518 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0722 04:08:16.151602    3518 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0722 04:08:16.155502    3518 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19313-1127/.minikube
	I0722 04:08:16.162624    3518 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0722 04:08:16.166861    3518 config.go:182] Loaded profile config "ha-248000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0722 04:08:16.166925    3518 driver.go:392] Setting default libvirt URI to qemu:///system
	I0722 04:08:16.170640    3518 out.go:177] * Using the qemu2 driver based on existing profile
	I0722 04:08:16.178621    3518 start.go:297] selected driver: qemu2
	I0722 04:08:16.178630    3518 start.go:901] validating driver "qemu2" against &{Name:ha-248000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesV
ersion:v1.30.3 ClusterName:ha-248000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:
false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mou
nt9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 04:08:16.178702    3518 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0722 04:08:16.181544    3518 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0722 04:08:16.181582    3518 cni.go:84] Creating CNI manager for ""
	I0722 04:08:16.181587    3518 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0722 04:08:16.181637    3518 start.go:340] cluster config:
	{Name:ha-248000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-248000 Namespace:default APIServerHAVIP:192.168.1
05.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false
helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 04:08:16.185821    3518 iso.go:125] acquiring lock: {Name:mkd71eaf3e91c1dd737b75fca5ca69ff9bdad18f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 04:08:16.198663    3518 out.go:177] * Starting "ha-248000" primary control-plane node in "ha-248000" cluster
	I0722 04:08:16.203745    3518 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0722 04:08:16.203775    3518 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0722 04:08:16.203791    3518 cache.go:56] Caching tarball of preloaded images
	I0722 04:08:16.203875    3518 preload.go:172] Found /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0722 04:08:16.203881    3518 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0722 04:08:16.203980    3518 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/ha-248000/config.json ...
	I0722 04:08:16.204413    3518 start.go:360] acquireMachinesLock for ha-248000: {Name:mkd413881e612ea8d9ddb0175c22cca270cd2452 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0722 04:08:16.204455    3518 start.go:364] duration metric: took 34.209µs to acquireMachinesLock for "ha-248000"
	I0722 04:08:16.204464    3518 start.go:96] Skipping create...Using existing machine configuration
	I0722 04:08:16.204470    3518 fix.go:54] fixHost starting: 
	I0722 04:08:16.204595    3518 fix.go:112] recreateIfNeeded on ha-248000: state=Stopped err=<nil>
	W0722 04:08:16.204604    3518 fix.go:138] unexpected machine state, will restart: <nil>
	I0722 04:08:16.209657    3518 out.go:177] * Restarting existing qemu2 VM for "ha-248000" ...
	I0722 04:08:16.217656    3518 qemu.go:418] Using hvf for hardware acceleration
	I0722 04:08:16.217706    3518 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/ha-248000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19313-1127/.minikube/machines/ha-248000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/ha-248000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:ab:e3:bb:52:7c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/ha-248000/disk.qcow2
	I0722 04:08:16.220497    3518 main.go:141] libmachine: STDOUT: 
	I0722 04:08:16.220517    3518 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0722 04:08:16.220545    3518 fix.go:56] duration metric: took 16.075334ms for fixHost
	I0722 04:08:16.220549    3518 start.go:83] releasing machines lock for "ha-248000", held for 16.089584ms
	W0722 04:08:16.220557    3518 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0722 04:08:16.220591    3518 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0722 04:08:16.220596    3518 start.go:729] Will try again in 5 seconds ...
	I0722 04:08:21.222762    3518 start.go:360] acquireMachinesLock for ha-248000: {Name:mkd413881e612ea8d9ddb0175c22cca270cd2452 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0722 04:08:21.223154    3518 start.go:364] duration metric: took 277.542µs to acquireMachinesLock for "ha-248000"
	I0722 04:08:21.223251    3518 start.go:96] Skipping create...Using existing machine configuration
	I0722 04:08:21.223268    3518 fix.go:54] fixHost starting: 
	I0722 04:08:21.223955    3518 fix.go:112] recreateIfNeeded on ha-248000: state=Stopped err=<nil>
	W0722 04:08:21.223981    3518 fix.go:138] unexpected machine state, will restart: <nil>
	I0722 04:08:21.229468    3518 out.go:177] * Restarting existing qemu2 VM for "ha-248000" ...
	I0722 04:08:21.237343    3518 qemu.go:418] Using hvf for hardware acceleration
	I0722 04:08:21.237643    3518 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/ha-248000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19313-1127/.minikube/machines/ha-248000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/ha-248000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:ab:e3:bb:52:7c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/ha-248000/disk.qcow2
	I0722 04:08:21.246970    3518 main.go:141] libmachine: STDOUT: 
	I0722 04:08:21.247069    3518 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0722 04:08:21.247153    3518 fix.go:56] duration metric: took 23.8835ms for fixHost
	I0722 04:08:21.247171    3518 start.go:83] releasing machines lock for "ha-248000", held for 23.991625ms
	W0722 04:08:21.247399    3518 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-248000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-248000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0722 04:08:21.256363    3518 out.go:177] 
	W0722 04:08:21.260524    3518 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0722 04:08:21.260560    3518 out.go:239] * 
	* 
	W0722 04:08:21.262954    3518 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0722 04:08:21.272407    3518 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:469: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p ha-248000 -v=7 --alsologtostderr" : exit status 80
ha_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-248000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-248000 -n ha-248000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-248000 -n ha-248000: exit status 7 (32.388791ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-248000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (332.62s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (0.11s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-darwin-arm64 -p ha-248000 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-248000 node delete m03 -v=7 --alsologtostderr: exit status 83 (46.862125ms)

                                                
                                                
-- stdout --
	* The control-plane node ha-248000-m03 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-248000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0722 04:08:21.410601    3531 out.go:291] Setting OutFile to fd 1 ...
	I0722 04:08:21.410817    3531 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:08:21.410821    3531 out.go:304] Setting ErrFile to fd 2...
	I0722 04:08:21.410823    3531 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:08:21.410956    3531 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19313-1127/.minikube/bin
	I0722 04:08:21.411174    3531 mustload.go:65] Loading cluster: ha-248000
	I0722 04:08:21.411396    3531 config.go:182] Loaded profile config "ha-248000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	W0722 04:08:21.411698    3531 out.go:239] ! The control-plane node ha-248000 host is not running (will try others): state=Stopped
	! The control-plane node ha-248000 host is not running (will try others): state=Stopped
	W0722 04:08:21.411811    3531 out.go:239] ! The control-plane node ha-248000-m02 host is not running (will try others): state=Stopped
	! The control-plane node ha-248000-m02 host is not running (will try others): state=Stopped
	I0722 04:08:21.416387    3531 out.go:177] * The control-plane node ha-248000-m03 host is not running: state=Stopped
	I0722 04:08:21.424304    3531 out.go:177]   To start a cluster, run: "minikube start -p ha-248000"

                                                
                                                
** /stderr **
ha_test.go:489: node delete returned an error. args "out/minikube-darwin-arm64 -p ha-248000 node delete m03 -v=7 --alsologtostderr": exit status 83
ha_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 -p ha-248000 status -v=7 --alsologtostderr
ha_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-248000 status -v=7 --alsologtostderr: exit status 7 (28.922208ms)

                                                
                                                
-- stdout --
	ha-248000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-248000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-248000-m03
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-248000-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0722 04:08:21.458273    3533 out.go:291] Setting OutFile to fd 1 ...
	I0722 04:08:21.458398    3533 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:08:21.458402    3533 out.go:304] Setting ErrFile to fd 2...
	I0722 04:08:21.458404    3533 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:08:21.458520    3533 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19313-1127/.minikube/bin
	I0722 04:08:21.458635    3533 out.go:298] Setting JSON to false
	I0722 04:08:21.458645    3533 mustload.go:65] Loading cluster: ha-248000
	I0722 04:08:21.458690    3533 notify.go:220] Checking for updates...
	I0722 04:08:21.458856    3533 config.go:182] Loaded profile config "ha-248000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0722 04:08:21.458862    3533 status.go:255] checking status of ha-248000 ...
	I0722 04:08:21.459088    3533 status.go:330] ha-248000 host status = "Stopped" (err=<nil>)
	I0722 04:08:21.459092    3533 status.go:343] host is not running, skipping remaining checks
	I0722 04:08:21.459094    3533 status.go:257] ha-248000 status: &{Name:ha-248000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0722 04:08:21.459104    3533 status.go:255] checking status of ha-248000-m02 ...
	I0722 04:08:21.459201    3533 status.go:330] ha-248000-m02 host status = "Stopped" (err=<nil>)
	I0722 04:08:21.459204    3533 status.go:343] host is not running, skipping remaining checks
	I0722 04:08:21.459206    3533 status.go:257] ha-248000-m02 status: &{Name:ha-248000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0722 04:08:21.459210    3533 status.go:255] checking status of ha-248000-m03 ...
	I0722 04:08:21.459303    3533 status.go:330] ha-248000-m03 host status = "Stopped" (err=<nil>)
	I0722 04:08:21.459305    3533 status.go:343] host is not running, skipping remaining checks
	I0722 04:08:21.459307    3533 status.go:257] ha-248000-m03 status: &{Name:ha-248000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0722 04:08:21.459311    3533 status.go:255] checking status of ha-248000-m04 ...
	I0722 04:08:21.459411    3533 status.go:330] ha-248000-m04 host status = "Stopped" (err=<nil>)
	I0722 04:08:21.459413    3533 status.go:343] host is not running, skipping remaining checks
	I0722 04:08:21.459415    3533 status.go:257] ha-248000-m04 status: &{Name:ha-248000-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:495: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-248000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-248000 -n ha-248000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-248000 -n ha-248000: exit status 7 (28.844584ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-248000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (0.11s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-248000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-248000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-248000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-248000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"K
ubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kub
evirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\
"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-248000 -n ha-248000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-248000 -n ha-248000: exit status 7 (28.632833ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-248000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.08s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (143.91s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-darwin-arm64 -p ha-248000 stop -v=7 --alsologtostderr
E0722 04:09:47.489997    1618 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/functional-753000/client.crt: no such file or directory
ha_test.go:531: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-248000 stop -v=7 --alsologtostderr: signal: killed (2m23.843539416s)

                                                
                                                
-- stdout --
	* Stopping node "ha-248000-m04"  ...
	* Stopping node "ha-248000-m03"  ...

                                                
                                                
-- /stdout --
** stderr ** 
	I0722 04:08:21.591653    3542 out.go:291] Setting OutFile to fd 1 ...
	I0722 04:08:21.591780    3542 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:08:21.591784    3542 out.go:304] Setting ErrFile to fd 2...
	I0722 04:08:21.591786    3542 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:08:21.591924    3542 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19313-1127/.minikube/bin
	I0722 04:08:21.592125    3542 out.go:298] Setting JSON to false
	I0722 04:08:21.592218    3542 mustload.go:65] Loading cluster: ha-248000
	I0722 04:08:21.592408    3542 config.go:182] Loaded profile config "ha-248000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0722 04:08:21.592458    3542 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/ha-248000/config.json ...
	I0722 04:08:21.592688    3542 mustload.go:65] Loading cluster: ha-248000
	I0722 04:08:21.592771    3542 config.go:182] Loaded profile config "ha-248000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0722 04:08:21.592790    3542 stop.go:39] StopHost: ha-248000-m04
	I0722 04:08:21.597360    3542 out.go:177] * Stopping node "ha-248000-m04"  ...
	I0722 04:08:21.604263    3542 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0722 04:08:21.604297    3542 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0722 04:08:21.604305    3542 sshutil.go:53] new ssh client: &{IP:192.168.105.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19313-1127/.minikube/machines/ha-248000-m04/id_rsa Username:docker}
	W0722 04:09:36.605676    3542 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.8:22: connect: operation timed out
	W0722 04:09:36.605925    3542 stop.go:55] failed to complete vm config backup (will continue): create dir: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	I0722 04:09:36.606095    3542 main.go:141] libmachine: Stopping "ha-248000-m04"...
	I0722 04:09:36.606199    3542 stop.go:66] stop err: Machine "ha-248000-m04" is already stopped.
	I0722 04:09:36.606245    3542 stop.go:69] host is already stopped
	I0722 04:09:36.606270    3542 stop.go:39] StopHost: ha-248000-m03
	I0722 04:09:36.623408    3542 out.go:177] * Stopping node "ha-248000-m03"  ...
	I0722 04:09:36.633432    3542 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0722 04:09:36.633727    3542 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0722 04:09:36.633834    3542 sshutil.go:53] new ssh client: &{IP:192.168.105.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19313-1127/.minikube/machines/ha-248000-m03/id_rsa Username:docker}

                                                
                                                
** /stderr **
ha_test.go:533: failed to stop cluster. args "out/minikube-darwin-arm64 -p ha-248000 stop -v=7 --alsologtostderr": signal: killed
ha_test.go:537: (dbg) Run:  out/minikube-darwin-arm64 -p ha-248000 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-248000 status -v=7 --alsologtostderr: context deadline exceeded (2.541µs)
ha_test.go:540: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-248000 status -v=7 --alsologtostderr" : context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-248000 -n ha-248000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-248000 -n ha-248000: exit status 7 (67.473084ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-248000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (143.91s)

                                                
                                    
TestImageBuild/serial/Setup (9.85s)

                                                
                                                
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-743000 --driver=qemu2 
E0722 04:10:51.176122    1618 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/addons-974000/client.crt: no such file or directory
image_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p image-743000 --driver=qemu2 : exit status 80 (9.783378208s)

                                                
                                                
-- stdout --
	* [image-743000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19313
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19313-1127/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19313-1127/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "image-743000" primary control-plane node in "image-743000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "image-743000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p image-743000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
image_test.go:70: failed to start minikube with args: "out/minikube-darwin-arm64 start -p image-743000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p image-743000 -n image-743000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p image-743000 -n image-743000: exit status 7 (66.982958ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "image-743000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestImageBuild/serial/Setup (9.85s)

                                                
                                    
TestJSONOutput/start/Command (9.69s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-398000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-398000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : exit status 80 (9.69222225s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"bc598511-4bbd-4df3-bcff-26d3c8a21971","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-398000] minikube v1.33.1 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"112f26a3-7510-4ed5-a06c-be7ae162af3d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19313"}}
	{"specversion":"1.0","id":"4c86e64e-bead-44b7-a06a-af996e1870a2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19313-1127/kubeconfig"}}
	{"specversion":"1.0","id":"805ee7d9-d526-4f8c-bc5b-f8cc72534ebb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"851be0d3-1e60-49e6-9a33-bdc69159b670","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"266a1433-15df-4be5-989a-bea47e32c007","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19313-1127/.minikube"}}
	{"specversion":"1.0","id":"aeeb2760-805c-47c8-8a14-2b2473e26633","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"b19e81f1-8c2f-4c9c-9445-ef1c2af31608","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"984fa4a1-a4d4-41e7-b136-f846e53ce037","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"0fc766bf-d7e2-4e61-a88d-7c320850ab5e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"json-output-398000\" primary control-plane node in \"json-output-398000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"08eb3352-a367-4e68-9e78-d3944ec2d9b7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"157ac65d-cbea-4d7a-be61-46b38abd8692","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Deleting \"json-output-398000\" in qemu2 ...","name":"Creating VM","totalsteps":"19"}}
	{"specversion":"1.0","id":"06abe9a8-67b7-4857-9a34-fde8ec95d3fe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"68d1625c-b03d-4aee-8aea-61d46ea7777b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"8d31b5cc-4f8d-46a4-8130-3727b597fc00","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"Failed to start qemu2 VM. Running \"minikube delete -p json-output-398000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"ea844745-dba6-4819-b682-1ecfe7e57dac","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1","name":"GUEST_PROVISION","url":""}}
	{"specversion":"1.0","id":"c60af09e-a235-4d08-86df-eeea2404941e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 start -p json-output-398000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 ": exit status 80
json_output_test.go:213: unable to marshal output: OUTPUT: 
json_output_test.go:70: converting to cloud events: invalid character 'O' looking for beginning of value
--- FAIL: TestJSONOutput/start/Command (9.69s)

                                                
                                    
TestJSONOutput/pause/Command (0.08s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-398000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p json-output-398000 --output=json --user=testUser: exit status 83 (75.535334ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"ecb61133-6a84-4de5-8ea2-7f60af85e8e8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"The control-plane node json-output-398000 host is not running: state=Stopped"}}
	{"specversion":"1.0","id":"55cf672e-5397-4794-a733-baf46cb8ccd7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"To start a cluster, run: \"minikube start -p json-output-398000\""}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 pause -p json-output-398000 --output=json --user=testUser": exit status 83
--- FAIL: TestJSONOutput/pause/Command (0.08s)

                                                
                                    
TestJSONOutput/unpause/Command (0.04s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-398000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 unpause -p json-output-398000 --output=json --user=testUser: exit status 83 (43.974625ms)

                                                
                                                
-- stdout --
	* The control-plane node json-output-398000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p json-output-398000"

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 unpause -p json-output-398000 --output=json --user=testUser": exit status 83
json_output_test.go:213: unable to marshal output: * The control-plane node json-output-398000 host is not running: state=Stopped
json_output_test.go:70: converting to cloud events: invalid character '*' looking for beginning of value
--- FAIL: TestJSONOutput/unpause/Command (0.04s)

                                                
                                    
TestMinikubeProfile (10.02s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-852000 --driver=qemu2 
minikube_profile_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p first-852000 --driver=qemu2 : exit status 80 (9.740784042s)

                                                
                                                
-- stdout --
	* [first-852000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19313
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19313-1127/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19313-1127/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "first-852000" primary control-plane node in "first-852000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "first-852000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p first-852000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
minikube_profile_test.go:46: test pre-condition failed. args "out/minikube-darwin-arm64 start -p first-852000 --driver=qemu2 ": exit status 80
panic.go:626: *** TestMinikubeProfile FAILED at 2024-07-22 04:11:19.219124 -0700 PDT m=+2587.490947209
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p second-854000 -n second-854000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p second-854000 -n second-854000: exit status 85 (75.408667ms)

                                                
                                                
-- stdout --
	* Profile "second-854000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p second-854000"

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "second-854000" host is not running, skipping log retrieval (state="* Profile \"second-854000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p second-854000\"")
helpers_test.go:175: Cleaning up "second-854000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-854000
panic.go:626: *** TestMinikubeProfile FAILED at 2024-07-22 04:11:19.404611 -0700 PDT m=+2587.676435501
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p first-852000 -n first-852000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p first-852000 -n first-852000: exit status 7 (28.020375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "first-852000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "first-852000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-852000
--- FAIL: TestMinikubeProfile (10.02s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (9.84s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-558000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-558000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (9.77220825s)

                                                
                                                
-- stdout --
	* [mount-start-1-558000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19313
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19313-1127/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19313-1127/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-558000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-558000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-558000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-558000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-558000 -n mount-start-1-558000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-558000 -n mount-start-1-558000: exit status 7 (68.691ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-558000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (9.84s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (9.85s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-941000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-941000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (9.782821958s)

                                                
                                                
-- stdout --
	* [multinode-941000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19313
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19313-1127/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19313-1127/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-941000" primary control-plane node in "multinode-941000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-941000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0722 04:11:29.543638    3728 out.go:291] Setting OutFile to fd 1 ...
	I0722 04:11:29.543767    3728 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:11:29.543772    3728 out.go:304] Setting ErrFile to fd 2...
	I0722 04:11:29.543774    3728 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:11:29.543899    3728 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19313-1127/.minikube/bin
	I0722 04:11:29.544973    3728 out.go:298] Setting JSON to false
	I0722 04:11:29.560989    3728 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4258,"bootTime":1721642431,"procs":450,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0722 04:11:29.561058    3728 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0722 04:11:29.567222    3728 out.go:177] * [multinode-941000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0722 04:11:29.574173    3728 notify.go:220] Checking for updates...
	I0722 04:11:29.578061    3728 out.go:177]   - MINIKUBE_LOCATION=19313
	I0722 04:11:29.582042    3728 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19313-1127/kubeconfig
	I0722 04:11:29.590063    3728 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0722 04:11:29.598097    3728 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0722 04:11:29.606082    3728 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19313-1127/.minikube
	I0722 04:11:29.614051    3728 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0722 04:11:29.618238    3728 driver.go:392] Setting default libvirt URI to qemu:///system
	I0722 04:11:29.621897    3728 out.go:177] * Using the qemu2 driver based on user configuration
	I0722 04:11:29.629099    3728 start.go:297] selected driver: qemu2
	I0722 04:11:29.629111    3728 start.go:901] validating driver "qemu2" against <nil>
	I0722 04:11:29.629129    3728 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0722 04:11:29.631575    3728 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0722 04:11:29.635066    3728 out.go:177] * Automatically selected the socket_vmnet network
	I0722 04:11:29.639135    3728 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0722 04:11:29.639165    3728 cni.go:84] Creating CNI manager for ""
	I0722 04:11:29.639169    3728 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0722 04:11:29.639173    3728 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0722 04:11:29.639202    3728 start.go:340] cluster config:
	{Name:multinode-941000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-941000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vm
net_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 04:11:29.643110    3728 iso.go:125] acquiring lock: {Name:mkd71eaf3e91c1dd737b75fca5ca69ff9bdad18f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 04:11:29.650174    3728 out.go:177] * Starting "multinode-941000" primary control-plane node in "multinode-941000" cluster
	I0722 04:11:29.654100    3728 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0722 04:11:29.654138    3728 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0722 04:11:29.654157    3728 cache.go:56] Caching tarball of preloaded images
	I0722 04:11:29.654271    3728 preload.go:172] Found /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0722 04:11:29.654279    3728 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0722 04:11:29.654514    3728 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/multinode-941000/config.json ...
	I0722 04:11:29.654539    3728 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/multinode-941000/config.json: {Name:mkfd8944d140fa971ce0647050e57d3334d7112d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 04:11:29.654771    3728 start.go:360] acquireMachinesLock for multinode-941000: {Name:mkd413881e612ea8d9ddb0175c22cca270cd2452 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0722 04:11:29.654813    3728 start.go:364] duration metric: took 34.5µs to acquireMachinesLock for "multinode-941000"
	I0722 04:11:29.654825    3728 start.go:93] Provisioning new machine with config: &{Name:multinode-941000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.30.3 ClusterName:multinode-941000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[
] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0722 04:11:29.654864    3728 start.go:125] createHost starting for "" (driver="qemu2")
	I0722 04:11:29.664065    3728 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0722 04:11:29.684614    3728 start.go:159] libmachine.API.Create for "multinode-941000" (driver="qemu2")
	I0722 04:11:29.684653    3728 client.go:168] LocalClient.Create starting
	I0722 04:11:29.684738    3728 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19313-1127/.minikube/certs/ca.pem
	I0722 04:11:29.684777    3728 main.go:141] libmachine: Decoding PEM data...
	I0722 04:11:29.684786    3728 main.go:141] libmachine: Parsing certificate...
	I0722 04:11:29.684836    3728 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19313-1127/.minikube/certs/cert.pem
	I0722 04:11:29.684863    3728 main.go:141] libmachine: Decoding PEM data...
	I0722 04:11:29.684875    3728 main.go:141] libmachine: Parsing certificate...
	I0722 04:11:29.685260    3728 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19313-1127/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0722 04:11:29.808298    3728 main.go:141] libmachine: Creating SSH key...
	I0722 04:11:29.898904    3728 main.go:141] libmachine: Creating Disk image...
	I0722 04:11:29.898911    3728 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0722 04:11:29.899088    3728 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/multinode-941000/disk.qcow2.raw /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/multinode-941000/disk.qcow2
	I0722 04:11:29.908153    3728 main.go:141] libmachine: STDOUT: 
	I0722 04:11:29.908171    3728 main.go:141] libmachine: STDERR: 
	I0722 04:11:29.908215    3728 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/multinode-941000/disk.qcow2 +20000M
	I0722 04:11:29.916066    3728 main.go:141] libmachine: STDOUT: Image resized.
	
	I0722 04:11:29.916080    3728 main.go:141] libmachine: STDERR: 
	I0722 04:11:29.916096    3728 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/multinode-941000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/multinode-941000/disk.qcow2
	I0722 04:11:29.916101    3728 main.go:141] libmachine: Starting QEMU VM...
	I0722 04:11:29.916122    3728 qemu.go:418] Using hvf for hardware acceleration
	I0722 04:11:29.916155    3728 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/multinode-941000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19313-1127/.minikube/machines/multinode-941000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/multinode-941000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c6:44:4c:7a:56:77 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/multinode-941000/disk.qcow2
	I0722 04:11:29.917742    3728 main.go:141] libmachine: STDOUT: 
	I0722 04:11:29.917757    3728 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0722 04:11:29.917774    3728 client.go:171] duration metric: took 233.118584ms to LocalClient.Create
	I0722 04:11:31.919931    3728 start.go:128] duration metric: took 2.265072417s to createHost
	I0722 04:11:31.919981    3728 start.go:83] releasing machines lock for "multinode-941000", held for 2.265188083s
	W0722 04:11:31.920040    3728 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0722 04:11:31.932965    3728 out.go:177] * Deleting "multinode-941000" in qemu2 ...
	W0722 04:11:31.963574    3728 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0722 04:11:31.963679    3728 start.go:729] Will try again in 5 seconds ...
	I0722 04:11:36.965862    3728 start.go:360] acquireMachinesLock for multinode-941000: {Name:mkd413881e612ea8d9ddb0175c22cca270cd2452 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0722 04:11:36.966334    3728 start.go:364] duration metric: took 371.708µs to acquireMachinesLock for "multinode-941000"
	I0722 04:11:36.966431    3728 start.go:93] Provisioning new machine with config: &{Name:multinode-941000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.30.3 ClusterName:multinode-941000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[
] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0722 04:11:36.966662    3728 start.go:125] createHost starting for "" (driver="qemu2")
	I0722 04:11:36.980289    3728 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0722 04:11:37.028269    3728 start.go:159] libmachine.API.Create for "multinode-941000" (driver="qemu2")
	I0722 04:11:37.028315    3728 client.go:168] LocalClient.Create starting
	I0722 04:11:37.028420    3728 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19313-1127/.minikube/certs/ca.pem
	I0722 04:11:37.028475    3728 main.go:141] libmachine: Decoding PEM data...
	I0722 04:11:37.028490    3728 main.go:141] libmachine: Parsing certificate...
	I0722 04:11:37.028544    3728 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19313-1127/.minikube/certs/cert.pem
	I0722 04:11:37.028586    3728 main.go:141] libmachine: Decoding PEM data...
	I0722 04:11:37.028609    3728 main.go:141] libmachine: Parsing certificate...
	I0722 04:11:37.029082    3728 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19313-1127/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0722 04:11:37.162978    3728 main.go:141] libmachine: Creating SSH key...
	I0722 04:11:37.229263    3728 main.go:141] libmachine: Creating Disk image...
	I0722 04:11:37.229269    3728 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0722 04:11:37.229435    3728 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/multinode-941000/disk.qcow2.raw /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/multinode-941000/disk.qcow2
	I0722 04:11:37.238595    3728 main.go:141] libmachine: STDOUT: 
	I0722 04:11:37.238608    3728 main.go:141] libmachine: STDERR: 
	I0722 04:11:37.238654    3728 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/multinode-941000/disk.qcow2 +20000M
	I0722 04:11:37.246285    3728 main.go:141] libmachine: STDOUT: Image resized.
	
	I0722 04:11:37.246298    3728 main.go:141] libmachine: STDERR: 
	I0722 04:11:37.246313    3728 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/multinode-941000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/multinode-941000/disk.qcow2
	I0722 04:11:37.246316    3728 main.go:141] libmachine: Starting QEMU VM...
	I0722 04:11:37.246331    3728 qemu.go:418] Using hvf for hardware acceleration
	I0722 04:11:37.246365    3728 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/multinode-941000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19313-1127/.minikube/machines/multinode-941000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/multinode-941000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:95:e7:7e:5f:b7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/multinode-941000/disk.qcow2
	I0722 04:11:37.247868    3728 main.go:141] libmachine: STDOUT: 
	I0722 04:11:37.247885    3728 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0722 04:11:37.247897    3728 client.go:171] duration metric: took 219.580042ms to LocalClient.Create
	I0722 04:11:39.250048    3728 start.go:128] duration metric: took 2.283386584s to createHost
	I0722 04:11:39.250102    3728 start.go:83] releasing machines lock for "multinode-941000", held for 2.283769417s
	W0722 04:11:39.250485    3728 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-941000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-941000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0722 04:11:39.267219    3728 out.go:177] 
	W0722 04:11:39.271266    3728 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0722 04:11:39.271296    3728 out.go:239] * 
	* 
	W0722 04:11:39.273969    3728 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0722 04:11:39.285109    3728 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:98: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-941000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-941000 -n multinode-941000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-941000 -n multinode-941000: exit status 7 (63.42225ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-941000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (9.85s)
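Every qemu2 start in this run fails at the same step: socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"), so libmachine never gets a VM and minikube exits with GUEST_PROVISION. A minimal triage sketch for the CI host, assuming socket_vmnet was installed via Homebrew at the paths shown in the log (the brew service name and the restart step are assumptions, not something this report verifies):

	# Is the daemon's unix socket present?
	ls -l /var/run/socket_vmnet || echo "socket missing: socket_vmnet is not running"
	# Is the daemon process alive?
	pgrep -fl socket_vmnet || echo "no socket_vmnet process found"
	# Restart it; socket_vmnet needs root to use vmnet.framework
	sudo brew services restart socket_vmnet
	# Re-run the failing start by hand to confirm
	out/minikube-darwin-arm64 start -p multinode-941000 --driver=qemu2 --memory=2200 --nodes=2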

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (98.59s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-941000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-941000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (126.510958ms)

                                                
                                                
** stderr ** 
	error: cluster "multinode-941000" does not exist

                                                
                                                
** /stderr **
multinode_test.go:495: failed to create busybox deployment to multinode cluster
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-941000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-941000 -- rollout status deployment/busybox: exit status 1 (55.727167ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-941000"

                                                
                                                
** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-941000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-941000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (56.035708ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-941000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-941000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-941000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.062417ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-941000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-941000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-941000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (100.973167ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-941000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-941000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-941000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (100.79575ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-941000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-941000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-941000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (98.892041ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-941000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-941000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-941000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (100.404708ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-941000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-941000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-941000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.016458ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-941000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-941000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-941000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (100.4475ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-941000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-941000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-941000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (101.09375ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-941000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-941000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-941000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.636125ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-941000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
E0722 04:12:48.102472    1618 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/addons-974000/client.crt: no such file or directory
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-941000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-941000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (101.8445ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-941000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:524: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-941000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-941000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (55.064541ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-941000"

                                                
                                                
** /stderr **
multinode_test.go:530: failed get Pod names
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-941000 -- exec  -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-941000 -- exec  -- nslookup kubernetes.io: exit status 1 (53.350625ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-941000"

                                                
                                                
** /stderr **
multinode_test.go:538: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-941000 -- exec  -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-941000 -- exec  -- nslookup kubernetes.default: exit status 1 (53.337167ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-941000"

                                                
                                                
** /stderr **
multinode_test.go:548: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-941000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-941000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (54.038041ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-941000"

                                                
                                                
** /stderr **
multinode_test.go:556: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-941000 -n multinode-941000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-941000 -n multinode-941000: exit status 7 (28.852125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-941000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (98.59s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.08s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-941000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:564: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-941000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (54.144292ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-941000"

                                                
                                                
** /stderr **
multinode_test.go:566: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-941000 -n multinode-941000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-941000 -n multinode-941000: exit status 7 (28.153ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-941000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.08s)

                                                
                                    
TestMultiNode/serial/AddNode (0.08s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-941000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-941000 -v 3 --alsologtostderr: exit status 83 (50.6835ms)

                                                
                                                
-- stdout --
	* The control-plane node multinode-941000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-941000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0722 04:13:18.066352    3845 out.go:291] Setting OutFile to fd 1 ...
	I0722 04:13:18.066658    3845 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:13:18.066662    3845 out.go:304] Setting ErrFile to fd 2...
	I0722 04:13:18.066665    3845 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:13:18.066780    3845 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19313-1127/.minikube/bin
	I0722 04:13:18.066968    3845 mustload.go:65] Loading cluster: multinode-941000
	I0722 04:13:18.067137    3845 config.go:182] Loaded profile config "multinode-941000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0722 04:13:18.074307    3845 out.go:177] * The control-plane node multinode-941000 host is not running: state=Stopped
	I0722 04:13:18.082311    3845 out.go:177]   To start a cluster, run: "minikube start -p multinode-941000"

                                                
                                                
** /stderr **
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-941000 -v 3 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-941000 -n multinode-941000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-941000 -n multinode-941000: exit status 7 (29.01525ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-941000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.08s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-941000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:221: (dbg) Non-zero exit: kubectl --context multinode-941000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (27.820625ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: multinode-941000

                                                
                                                
** /stderr **
multinode_test.go:223: failed to 'kubectl get nodes' with args "kubectl --context multinode-941000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:230: failed to decode json from label list: args "kubectl --context multinode-941000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-941000 -n multinode-941000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-941000 -n multinode-941000: exit status 7 (28.627042ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-941000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.06s)
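The "context was not found" error is a downstream symptom of the failed start: minikube never wrote a multinode-941000 entry into the kubeconfig, so kubectl --context multinode-941000 fails before contacting any server. A quick check on the host (a sketch, not part of the test suite) is to list the contexts the kubeconfig actually contains:

	kubectl config get-contexts -o name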

                                                
                                    
TestMultiNode/serial/ProfileList (0.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:166: expected profile "multinode-941000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-941000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"multinode-941000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNU
MACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"multinode-941000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVer
sion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":
\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-941000 -n multinode-941000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-941000 -n multinode-941000: exit status 7 (28.796458ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-941000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.07s)
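The assertion compares the node count recorded in the profile JSON (one default control-plane entry, since provisioning failed) against the three nodes the serial suite expects by this point (two from the initial --nodes=2 start plus the one AddNode would have added). A one-liner sketch to pull the same count out of the JSON shown above (assumes jq is available on the host; the field names are taken from the output itself):

	out/minikube-darwin-arm64 profile list --output json \
	  | jq '.valid[] | select(.Name == "multinode-941000") | .Config.Nodes | length'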

                                                
                                    
TestMultiNode/serial/CopyFile (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-941000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-941000 status --output json --alsologtostderr: exit status 7 (27.49275ms)

                                                
                                                
-- stdout --
	{"Name":"multinode-941000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

                                                
                                                
-- /stdout --
** stderr ** 
	I0722 04:13:18.276213    3857 out.go:291] Setting OutFile to fd 1 ...
	I0722 04:13:18.276354    3857 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:13:18.276357    3857 out.go:304] Setting ErrFile to fd 2...
	I0722 04:13:18.276359    3857 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:13:18.276486    3857 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19313-1127/.minikube/bin
	I0722 04:13:18.276604    3857 out.go:298] Setting JSON to true
	I0722 04:13:18.276614    3857 mustload.go:65] Loading cluster: multinode-941000
	I0722 04:13:18.276678    3857 notify.go:220] Checking for updates...
	I0722 04:13:18.276814    3857 config.go:182] Loaded profile config "multinode-941000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0722 04:13:18.276820    3857 status.go:255] checking status of multinode-941000 ...
	I0722 04:13:18.277026    3857 status.go:330] multinode-941000 host status = "Stopped" (err=<nil>)
	I0722 04:13:18.277030    3857 status.go:343] host is not running, skipping remaining checks
	I0722 04:13:18.277032    3857 status.go:257] multinode-941000 status: &{Name:multinode-941000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:191: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-941000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-941000 -n multinode-941000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-941000 -n multinode-941000: exit status 7 (28.376958ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-941000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.06s)
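The unmarshal error here is a shape mismatch rather than a crash: with a single stopped node, status --output json prints one JSON object, while the test decodes the output into a slice ([]cmd.Status). A small sketch that tolerates both shapes when inspecting the output by hand (assumes jq; illustrative only):

	out/minikube-darwin-arm64 -p multinode-941000 status --output json \
	  | jq 'if type == "array" then . else [.] end | length'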

                                                
                                    
TestMultiNode/serial/StopNode (0.13s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-941000 node stop m03
multinode_test.go:248: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-941000 node stop m03: exit status 85 (48.123625ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:250: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-941000 node stop m03": exit status 85
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-941000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-941000 status: exit status 7 (28.535917ms)

                                                
                                                
-- stdout --
	multinode-941000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-941000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-941000 status --alsologtostderr: exit status 7 (27.85275ms)

                                                
                                                
-- stdout --
	multinode-941000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0722 04:13:18.409963    3865 out.go:291] Setting OutFile to fd 1 ...
	I0722 04:13:18.410094    3865 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:13:18.410098    3865 out.go:304] Setting ErrFile to fd 2...
	I0722 04:13:18.410101    3865 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:13:18.410218    3865 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19313-1127/.minikube/bin
	I0722 04:13:18.410328    3865 out.go:298] Setting JSON to false
	I0722 04:13:18.410339    3865 mustload.go:65] Loading cluster: multinode-941000
	I0722 04:13:18.410410    3865 notify.go:220] Checking for updates...
	I0722 04:13:18.410525    3865 config.go:182] Loaded profile config "multinode-941000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0722 04:13:18.410531    3865 status.go:255] checking status of multinode-941000 ...
	I0722 04:13:18.410725    3865 status.go:330] multinode-941000 host status = "Stopped" (err=<nil>)
	I0722 04:13:18.410729    3865 status.go:343] host is not running, skipping remaining checks
	I0722 04:13:18.410731    3865 status.go:257] multinode-941000 status: &{Name:multinode-941000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:267: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-941000 status --alsologtostderr": multinode-941000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-941000 -n multinode-941000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-941000 -n multinode-941000: exit status 7 (28.525292ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-941000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.13s)
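Both this test and StartAfterStop below fail with GUEST_NODE_RETRIEVE because the profile only ever contains the single control-plane entry; there is no m03 to stop or start. One way to confirm what the profile actually holds (a sketch using minikube's node list subcommand, not part of the test):

	out/minikube-darwin-arm64 -p multinode-941000 node list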

                                                
                                    
TestMultiNode/serial/StartAfterStop (57.27s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-941000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-941000 node start m03 -v=7 --alsologtostderr: exit status 85 (47.71375ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0722 04:13:18.467398    3869 out.go:291] Setting OutFile to fd 1 ...
	I0722 04:13:18.467604    3869 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:13:18.467607    3869 out.go:304] Setting ErrFile to fd 2...
	I0722 04:13:18.467610    3869 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:13:18.467732    3869 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19313-1127/.minikube/bin
	I0722 04:13:18.467938    3869 mustload.go:65] Loading cluster: multinode-941000
	I0722 04:13:18.468116    3869 config.go:182] Loaded profile config "multinode-941000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0722 04:13:18.473316    3869 out.go:177] 
	W0722 04:13:18.476323    3869 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W0722 04:13:18.476327    3869 out.go:239] * 
	* 
	W0722 04:13:18.477962    3869 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0722 04:13:18.482295    3869 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:284: I0722 04:13:18.467398    3869 out.go:291] Setting OutFile to fd 1 ...
I0722 04:13:18.467604    3869 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0722 04:13:18.467607    3869 out.go:304] Setting ErrFile to fd 2...
I0722 04:13:18.467610    3869 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0722 04:13:18.467732    3869 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19313-1127/.minikube/bin
I0722 04:13:18.467938    3869 mustload.go:65] Loading cluster: multinode-941000
I0722 04:13:18.468116    3869 config.go:182] Loaded profile config "multinode-941000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0722 04:13:18.473316    3869 out.go:177] 
W0722 04:13:18.476323    3869 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W0722 04:13:18.476327    3869 out.go:239] * 
* 
W0722 04:13:18.477962    3869 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0722 04:13:18.482295    3869 out.go:177] 
multinode_test.go:285: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-941000 node start m03 -v=7 --alsologtostderr": exit status 85
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-941000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-941000 status -v=7 --alsologtostderr: exit status 7 (28.276791ms)

                                                
                                                
-- stdout --
	multinode-941000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0722 04:13:18.514974    3871 out.go:291] Setting OutFile to fd 1 ...
	I0722 04:13:18.515106    3871 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:13:18.515109    3871 out.go:304] Setting ErrFile to fd 2...
	I0722 04:13:18.515116    3871 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:13:18.515231    3871 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19313-1127/.minikube/bin
	I0722 04:13:18.515349    3871 out.go:298] Setting JSON to false
	I0722 04:13:18.515359    3871 mustload.go:65] Loading cluster: multinode-941000
	I0722 04:13:18.515403    3871 notify.go:220] Checking for updates...
	I0722 04:13:18.515546    3871 config.go:182] Loaded profile config "multinode-941000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0722 04:13:18.515553    3871 status.go:255] checking status of multinode-941000 ...
	I0722 04:13:18.515743    3871 status.go:330] multinode-941000 host status = "Stopped" (err=<nil>)
	I0722 04:13:18.515747    3871 status.go:343] host is not running, skipping remaining checks
	I0722 04:13:18.515749    3871 status.go:257] multinode-941000 status: &{Name:multinode-941000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-941000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-941000 status -v=7 --alsologtostderr: exit status 7 (70.46675ms)

                                                
                                                
-- stdout --
	multinode-941000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0722 04:13:19.508843    3873 out.go:291] Setting OutFile to fd 1 ...
	I0722 04:13:19.509083    3873 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:13:19.509088    3873 out.go:304] Setting ErrFile to fd 2...
	I0722 04:13:19.509092    3873 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:13:19.509270    3873 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19313-1127/.minikube/bin
	I0722 04:13:19.509476    3873 out.go:298] Setting JSON to false
	I0722 04:13:19.509493    3873 mustload.go:65] Loading cluster: multinode-941000
	I0722 04:13:19.509544    3873 notify.go:220] Checking for updates...
	I0722 04:13:19.509780    3873 config.go:182] Loaded profile config "multinode-941000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0722 04:13:19.509790    3873 status.go:255] checking status of multinode-941000 ...
	I0722 04:13:19.510086    3873 status.go:330] multinode-941000 host status = "Stopped" (err=<nil>)
	I0722 04:13:19.510091    3873 status.go:343] host is not running, skipping remaining checks
	I0722 04:13:19.510094    3873 status.go:257] multinode-941000 status: &{Name:multinode-941000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-941000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-941000 status -v=7 --alsologtostderr: exit status 7 (71.529625ms)

                                                
                                                
-- stdout --
	multinode-941000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0722 04:13:21.371102    3877 out.go:291] Setting OutFile to fd 1 ...
	I0722 04:13:21.371308    3877 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:13:21.371313    3877 out.go:304] Setting ErrFile to fd 2...
	I0722 04:13:21.371317    3877 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:13:21.371516    3877 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19313-1127/.minikube/bin
	I0722 04:13:21.371706    3877 out.go:298] Setting JSON to false
	I0722 04:13:21.371721    3877 mustload.go:65] Loading cluster: multinode-941000
	I0722 04:13:21.371769    3877 notify.go:220] Checking for updates...
	I0722 04:13:21.371992    3877 config.go:182] Loaded profile config "multinode-941000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0722 04:13:21.372002    3877 status.go:255] checking status of multinode-941000 ...
	I0722 04:13:21.372279    3877 status.go:330] multinode-941000 host status = "Stopped" (err=<nil>)
	I0722 04:13:21.372284    3877 status.go:343] host is not running, skipping remaining checks
	I0722 04:13:21.372287    3877 status.go:257] multinode-941000 status: &{Name:multinode-941000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-941000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-941000 status -v=7 --alsologtostderr: exit status 7 (68.247167ms)

                                                
                                                
-- stdout --
	multinode-941000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0722 04:13:23.661515    3879 out.go:291] Setting OutFile to fd 1 ...
	I0722 04:13:23.661717    3879 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:13:23.661722    3879 out.go:304] Setting ErrFile to fd 2...
	I0722 04:13:23.661725    3879 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:13:23.661878    3879 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19313-1127/.minikube/bin
	I0722 04:13:23.662034    3879 out.go:298] Setting JSON to false
	I0722 04:13:23.662045    3879 mustload.go:65] Loading cluster: multinode-941000
	I0722 04:13:23.662099    3879 notify.go:220] Checking for updates...
	I0722 04:13:23.662298    3879 config.go:182] Loaded profile config "multinode-941000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0722 04:13:23.662305    3879 status.go:255] checking status of multinode-941000 ...
	I0722 04:13:23.662590    3879 status.go:330] multinode-941000 host status = "Stopped" (err=<nil>)
	I0722 04:13:23.662594    3879 status.go:343] host is not running, skipping remaining checks
	I0722 04:13:23.662597    3879 status.go:257] multinode-941000 status: &{Name:multinode-941000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-941000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-941000 status -v=7 --alsologtostderr: exit status 7 (70.535458ms)

                                                
                                                
-- stdout --
	multinode-941000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0722 04:13:25.637952    3883 out.go:291] Setting OutFile to fd 1 ...
	I0722 04:13:25.638158    3883 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:13:25.638163    3883 out.go:304] Setting ErrFile to fd 2...
	I0722 04:13:25.638167    3883 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:13:25.638393    3883 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19313-1127/.minikube/bin
	I0722 04:13:25.638583    3883 out.go:298] Setting JSON to false
	I0722 04:13:25.638599    3883 mustload.go:65] Loading cluster: multinode-941000
	I0722 04:13:25.638651    3883 notify.go:220] Checking for updates...
	I0722 04:13:25.638939    3883 config.go:182] Loaded profile config "multinode-941000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0722 04:13:25.638949    3883 status.go:255] checking status of multinode-941000 ...
	I0722 04:13:25.639271    3883 status.go:330] multinode-941000 host status = "Stopped" (err=<nil>)
	I0722 04:13:25.639277    3883 status.go:343] host is not running, skipping remaining checks
	I0722 04:13:25.639280    3883 status.go:257] multinode-941000 status: &{Name:multinode-941000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-941000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-941000 status -v=7 --alsologtostderr: exit status 7 (68.497833ms)

                                                
                                                
-- stdout --
	multinode-941000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0722 04:13:29.301723    3887 out.go:291] Setting OutFile to fd 1 ...
	I0722 04:13:29.301911    3887 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:13:29.301916    3887 out.go:304] Setting ErrFile to fd 2...
	I0722 04:13:29.301918    3887 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:13:29.302094    3887 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19313-1127/.minikube/bin
	I0722 04:13:29.302257    3887 out.go:298] Setting JSON to false
	I0722 04:13:29.302272    3887 mustload.go:65] Loading cluster: multinode-941000
	I0722 04:13:29.302312    3887 notify.go:220] Checking for updates...
	I0722 04:13:29.302559    3887 config.go:182] Loaded profile config "multinode-941000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0722 04:13:29.302568    3887 status.go:255] checking status of multinode-941000 ...
	I0722 04:13:29.302914    3887 status.go:330] multinode-941000 host status = "Stopped" (err=<nil>)
	I0722 04:13:29.302920    3887 status.go:343] host is not running, skipping remaining checks
	I0722 04:13:29.302923    3887 status.go:257] multinode-941000 status: &{Name:multinode-941000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-941000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-941000 status -v=7 --alsologtostderr: exit status 7 (67.830209ms)

                                                
                                                
-- stdout --
	multinode-941000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0722 04:13:39.658618    3893 out.go:291] Setting OutFile to fd 1 ...
	I0722 04:13:39.658811    3893 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:13:39.658816    3893 out.go:304] Setting ErrFile to fd 2...
	I0722 04:13:39.658820    3893 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:13:39.659023    3893 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19313-1127/.minikube/bin
	I0722 04:13:39.659201    3893 out.go:298] Setting JSON to false
	I0722 04:13:39.659216    3893 mustload.go:65] Loading cluster: multinode-941000
	I0722 04:13:39.659262    3893 notify.go:220] Checking for updates...
	I0722 04:13:39.659536    3893 config.go:182] Loaded profile config "multinode-941000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0722 04:13:39.659545    3893 status.go:255] checking status of multinode-941000 ...
	I0722 04:13:39.659851    3893 status.go:330] multinode-941000 host status = "Stopped" (err=<nil>)
	I0722 04:13:39.659857    3893 status.go:343] host is not running, skipping remaining checks
	I0722 04:13:39.659861    3893 status.go:257] multinode-941000 status: &{Name:multinode-941000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-941000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-941000 status -v=7 --alsologtostderr: exit status 7 (70.209417ms)

                                                
                                                
-- stdout --
	multinode-941000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0722 04:13:54.035491    3903 out.go:291] Setting OutFile to fd 1 ...
	I0722 04:13:54.035690    3903 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:13:54.035695    3903 out.go:304] Setting ErrFile to fd 2...
	I0722 04:13:54.035699    3903 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:13:54.035896    3903 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19313-1127/.minikube/bin
	I0722 04:13:54.036063    3903 out.go:298] Setting JSON to false
	I0722 04:13:54.036078    3903 mustload.go:65] Loading cluster: multinode-941000
	I0722 04:13:54.036129    3903 notify.go:220] Checking for updates...
	I0722 04:13:54.036348    3903 config.go:182] Loaded profile config "multinode-941000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0722 04:13:54.036357    3903 status.go:255] checking status of multinode-941000 ...
	I0722 04:13:54.036691    3903 status.go:330] multinode-941000 host status = "Stopped" (err=<nil>)
	I0722 04:13:54.036696    3903 status.go:343] host is not running, skipping remaining checks
	I0722 04:13:54.036699    3903 status.go:257] multinode-941000 status: &{Name:multinode-941000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-941000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-941000 status -v=7 --alsologtostderr: exit status 7 (71.133667ms)

                                                
                                                
-- stdout --
	multinode-941000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0722 04:14:15.670057    3918 out.go:291] Setting OutFile to fd 1 ...
	I0722 04:14:15.670295    3918 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:14:15.670300    3918 out.go:304] Setting ErrFile to fd 2...
	I0722 04:14:15.670304    3918 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:14:15.670479    3918 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19313-1127/.minikube/bin
	I0722 04:14:15.670679    3918 out.go:298] Setting JSON to false
	I0722 04:14:15.670695    3918 mustload.go:65] Loading cluster: multinode-941000
	I0722 04:14:15.670730    3918 notify.go:220] Checking for updates...
	I0722 04:14:15.671030    3918 config.go:182] Loaded profile config "multinode-941000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0722 04:14:15.671043    3918 status.go:255] checking status of multinode-941000 ...
	I0722 04:14:15.671367    3918 status.go:330] multinode-941000 host status = "Stopped" (err=<nil>)
	I0722 04:14:15.671373    3918 status.go:343] host is not running, skipping remaining checks
	I0722 04:14:15.671376    3918 status.go:257] multinode-941000 status: &{Name:multinode-941000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:294: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-941000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-941000 -n multinode-941000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-941000 -n multinode-941000: exit status 7 (32.584208ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-941000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (57.27s)
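
A minimal way to rerun the checks this test performs, outside the harness and assuming the same binary and profile name used in this run: `node start m03` exits 85 because the profile no longer records an m03 node, and every status poll above reports the control-plane host as Stopped, so the two commands below (both taken from the test output) show that same state directly.

	$ out/minikube-darwin-arm64 -p multinode-941000 node list                          # nodes the profile still knows about (m03 should be absent)
	$ out/minikube-darwin-arm64 -p multinode-941000 status -v=7 --alsologtostderr      # host/kubelet/apiserver state, matching the polls above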

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (8.81s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-941000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-941000
multinode_test.go:321: (dbg) Done: out/minikube-darwin-arm64 stop -p multinode-941000: (3.424452417s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-941000 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-941000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.253535042s)

                                                
                                                
-- stdout --
	* [multinode-941000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19313
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19313-1127/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19313-1127/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-941000" primary control-plane node in "multinode-941000" cluster
	* Restarting existing qemu2 VM for "multinode-941000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-941000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0722 04:14:19.219483    3944 out.go:291] Setting OutFile to fd 1 ...
	I0722 04:14:19.219656    3944 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:14:19.219662    3944 out.go:304] Setting ErrFile to fd 2...
	I0722 04:14:19.219667    3944 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:14:19.219842    3944 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19313-1127/.minikube/bin
	I0722 04:14:19.221278    3944 out.go:298] Setting JSON to false
	I0722 04:14:19.241234    3944 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4428,"bootTime":1721642431,"procs":452,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0722 04:14:19.241300    3944 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0722 04:14:19.247316    3944 out.go:177] * [multinode-941000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0722 04:14:19.254328    3944 notify.go:220] Checking for updates...
	I0722 04:14:19.258211    3944 out.go:177]   - MINIKUBE_LOCATION=19313
	I0722 04:14:19.262187    3944 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19313-1127/kubeconfig
	I0722 04:14:19.270263    3944 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0722 04:14:19.277172    3944 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0722 04:14:19.285205    3944 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19313-1127/.minikube
	I0722 04:14:19.293201    3944 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0722 04:14:19.297486    3944 config.go:182] Loaded profile config "multinode-941000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0722 04:14:19.297545    3944 driver.go:392] Setting default libvirt URI to qemu:///system
	I0722 04:14:19.302218    3944 out.go:177] * Using the qemu2 driver based on existing profile
	I0722 04:14:19.312295    3944 start.go:297] selected driver: qemu2
	I0722 04:14:19.312304    3944 start.go:901] validating driver "qemu2" against &{Name:multinode-941000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.30.3 ClusterName:multinode-941000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] M
ountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 04:14:19.312373    3944 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0722 04:14:19.315174    3944 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0722 04:14:19.315228    3944 cni.go:84] Creating CNI manager for ""
	I0722 04:14:19.315235    3944 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0722 04:14:19.315293    3944 start.go:340] cluster config:
	{Name:multinode-941000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-941000 Namespace:default APIServerH
AVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fals
e DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 04:14:19.319898    3944 iso.go:125] acquiring lock: {Name:mkd71eaf3e91c1dd737b75fca5ca69ff9bdad18f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 04:14:19.322319    3944 out.go:177] * Starting "multinode-941000" primary control-plane node in "multinode-941000" cluster
	I0722 04:14:19.329272    3944 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0722 04:14:19.329303    3944 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0722 04:14:19.329317    3944 cache.go:56] Caching tarball of preloaded images
	I0722 04:14:19.329417    3944 preload.go:172] Found /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0722 04:14:19.329424    3944 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0722 04:14:19.329498    3944 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/multinode-941000/config.json ...
	I0722 04:14:19.329880    3944 start.go:360] acquireMachinesLock for multinode-941000: {Name:mkd413881e612ea8d9ddb0175c22cca270cd2452 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0722 04:14:19.329928    3944 start.go:364] duration metric: took 39.5µs to acquireMachinesLock for "multinode-941000"
	I0722 04:14:19.329939    3944 start.go:96] Skipping create...Using existing machine configuration
	I0722 04:14:19.329947    3944 fix.go:54] fixHost starting: 
	I0722 04:14:19.330117    3944 fix.go:112] recreateIfNeeded on multinode-941000: state=Stopped err=<nil>
	W0722 04:14:19.330128    3944 fix.go:138] unexpected machine state, will restart: <nil>
	I0722 04:14:19.339194    3944 out.go:177] * Restarting existing qemu2 VM for "multinode-941000" ...
	I0722 04:14:19.343220    3944 qemu.go:418] Using hvf for hardware acceleration
	I0722 04:14:19.343272    3944 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/multinode-941000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19313-1127/.minikube/machines/multinode-941000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/multinode-941000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:95:e7:7e:5f:b7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/multinode-941000/disk.qcow2
	I0722 04:14:19.345622    3944 main.go:141] libmachine: STDOUT: 
	I0722 04:14:19.345647    3944 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0722 04:14:19.345691    3944 fix.go:56] duration metric: took 15.734125ms for fixHost
	I0722 04:14:19.345698    3944 start.go:83] releasing machines lock for "multinode-941000", held for 15.763834ms
	W0722 04:14:19.345704    3944 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0722 04:14:19.345747    3944 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0722 04:14:19.345753    3944 start.go:729] Will try again in 5 seconds ...
	I0722 04:14:24.347924    3944 start.go:360] acquireMachinesLock for multinode-941000: {Name:mkd413881e612ea8d9ddb0175c22cca270cd2452 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0722 04:14:24.348446    3944 start.go:364] duration metric: took 388.292µs to acquireMachinesLock for "multinode-941000"
	I0722 04:14:24.348540    3944 start.go:96] Skipping create...Using existing machine configuration
	I0722 04:14:24.348561    3944 fix.go:54] fixHost starting: 
	I0722 04:14:24.349268    3944 fix.go:112] recreateIfNeeded on multinode-941000: state=Stopped err=<nil>
	W0722 04:14:24.349295    3944 fix.go:138] unexpected machine state, will restart: <nil>
	I0722 04:14:24.355870    3944 out.go:177] * Restarting existing qemu2 VM for "multinode-941000" ...
	I0722 04:14:24.365825    3944 qemu.go:418] Using hvf for hardware acceleration
	I0722 04:14:24.366084    3944 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/multinode-941000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19313-1127/.minikube/machines/multinode-941000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/multinode-941000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:95:e7:7e:5f:b7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/multinode-941000/disk.qcow2
	I0722 04:14:24.375047    3944 main.go:141] libmachine: STDOUT: 
	I0722 04:14:24.375117    3944 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0722 04:14:24.375195    3944 fix.go:56] duration metric: took 26.631583ms for fixHost
	I0722 04:14:24.375214    3944 start.go:83] releasing machines lock for "multinode-941000", held for 26.744542ms
	W0722 04:14:24.375394    3944 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-941000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-941000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0722 04:14:24.383833    3944 out.go:177] 
	W0722 04:14:24.386818    3944 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0722 04:14:24.386851    3944 out.go:239] * 
	* 
	W0722 04:14:24.389299    3944 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0722 04:14:24.398806    3944 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p multinode-941000" : exit status 80
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-941000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-941000 -n multinode-941000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-941000 -n multinode-941000: exit status 7 (32.445125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-941000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (8.81s)
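
Both restart attempts in this test fail at the same step (as does the RestartMultiNode run further below): the qemu2 driver launches the VM through /opt/socket_vmnet/bin/socket_vmnet_client and the dial of /var/run/socket_vmnet is refused, so fixHost never gets a running host and the profile stays Stopped. A minimal host-side check is sketched here, assuming socket_vmnet is expected at the paths shown in the log above; it is not something the harness itself runs.

	$ ls -l /var/run/socket_vmnet        # the Unix socket the qemu2 driver dials; missing or stale if no daemon is serving it
	$ ps aux | grep '[s]ocket_vmnet'     # whether a socket_vmnet daemon process is running at all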

                                                
                                    
TestMultiNode/serial/DeleteNode (0.1s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-941000 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-941000 node delete m03: exit status 83 (43.649042ms)

                                                
                                                
-- stdout --
	* The control-plane node multinode-941000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-941000"

                                                
                                                
-- /stdout --
multinode_test.go:418: node delete returned an error. args "out/minikube-darwin-arm64 -p multinode-941000 node delete m03": exit status 83
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-941000 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-941000 status --alsologtostderr: exit status 7 (28.448958ms)

                                                
                                                
-- stdout --
	multinode-941000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0722 04:14:24.582496    3974 out.go:291] Setting OutFile to fd 1 ...
	I0722 04:14:24.582630    3974 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:14:24.582634    3974 out.go:304] Setting ErrFile to fd 2...
	I0722 04:14:24.582636    3974 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:14:24.582766    3974 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19313-1127/.minikube/bin
	I0722 04:14:24.582888    3974 out.go:298] Setting JSON to false
	I0722 04:14:24.582898    3974 mustload.go:65] Loading cluster: multinode-941000
	I0722 04:14:24.582966    3974 notify.go:220] Checking for updates...
	I0722 04:14:24.583095    3974 config.go:182] Loaded profile config "multinode-941000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0722 04:14:24.583102    3974 status.go:255] checking status of multinode-941000 ...
	I0722 04:14:24.583303    3974 status.go:330] multinode-941000 host status = "Stopped" (err=<nil>)
	I0722 04:14:24.583307    3974 status.go:343] host is not running, skipping remaining checks
	I0722 04:14:24.583309    3974 status.go:257] multinode-941000 status: &{Name:multinode-941000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:424: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-941000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-941000 -n multinode-941000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-941000 -n multinode-941000: exit status 7 (28.645166ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-941000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.10s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (1.92s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-941000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-arm64 -p multinode-941000 stop: (1.798700041s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-941000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-941000 status: exit status 7 (64.519042ms)

                                                
                                                
-- stdout --
	multinode-941000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-941000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-941000 status --alsologtostderr: exit status 7 (31.500125ms)

                                                
                                                
-- stdout --
	multinode-941000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0722 04:14:26.506565    3990 out.go:291] Setting OutFile to fd 1 ...
	I0722 04:14:26.506702    3990 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:14:26.506705    3990 out.go:304] Setting ErrFile to fd 2...
	I0722 04:14:26.506708    3990 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:14:26.506843    3990 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19313-1127/.minikube/bin
	I0722 04:14:26.506960    3990 out.go:298] Setting JSON to false
	I0722 04:14:26.506970    3990 mustload.go:65] Loading cluster: multinode-941000
	I0722 04:14:26.507043    3990 notify.go:220] Checking for updates...
	I0722 04:14:26.507165    3990 config.go:182] Loaded profile config "multinode-941000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0722 04:14:26.507171    3990 status.go:255] checking status of multinode-941000 ...
	I0722 04:14:26.507401    3990 status.go:330] multinode-941000 host status = "Stopped" (err=<nil>)
	I0722 04:14:26.507405    3990 status.go:343] host is not running, skipping remaining checks
	I0722 04:14:26.507407    3990 status.go:257] multinode-941000 status: &{Name:multinode-941000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-941000 status --alsologtostderr": multinode-941000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-941000 status --alsologtostderr": multinode-941000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-941000 -n multinode-941000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-941000 -n multinode-941000: exit status 7 (27.723917ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-941000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (1.92s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (5.26s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-941000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-941000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.193014125s)

                                                
                                                
-- stdout --
	* [multinode-941000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19313
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19313-1127/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19313-1127/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-941000" primary control-plane node in "multinode-941000" cluster
	* Restarting existing qemu2 VM for "multinode-941000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-941000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0722 04:14:26.561858    3994 out.go:291] Setting OutFile to fd 1 ...
	I0722 04:14:26.561991    3994 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:14:26.561994    3994 out.go:304] Setting ErrFile to fd 2...
	I0722 04:14:26.561997    3994 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:14:26.562117    3994 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19313-1127/.minikube/bin
	I0722 04:14:26.563121    3994 out.go:298] Setting JSON to false
	I0722 04:14:26.578913    3994 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4435,"bootTime":1721642431,"procs":464,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0722 04:14:26.578986    3994 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0722 04:14:26.584114    3994 out.go:177] * [multinode-941000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0722 04:14:26.590082    3994 notify.go:220] Checking for updates...
	I0722 04:14:26.594046    3994 out.go:177]   - MINIKUBE_LOCATION=19313
	I0722 04:14:26.602031    3994 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19313-1127/kubeconfig
	I0722 04:14:26.609055    3994 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0722 04:14:26.612001    3994 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0722 04:14:26.615041    3994 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19313-1127/.minikube
	I0722 04:14:26.618172    3994 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0722 04:14:26.621284    3994 config.go:182] Loaded profile config "multinode-941000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0722 04:14:26.621536    3994 driver.go:392] Setting default libvirt URI to qemu:///system
	I0722 04:14:26.625012    3994 out.go:177] * Using the qemu2 driver based on existing profile
	I0722 04:14:26.631016    3994 start.go:297] selected driver: qemu2
	I0722 04:14:26.631023    3994 start.go:901] validating driver "qemu2" against &{Name:multinode-941000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.30.3 ClusterName:multinode-941000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] M
ountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 04:14:26.631072    3994 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0722 04:14:26.633464    3994 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0722 04:14:26.633486    3994 cni.go:84] Creating CNI manager for ""
	I0722 04:14:26.633491    3994 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0722 04:14:26.633542    3994 start.go:340] cluster config:
	{Name:multinode-941000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-941000 Namespace:default APIServerH
AVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fals
e DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 04:14:26.637188    3994 iso.go:125] acquiring lock: {Name:mkd71eaf3e91c1dd737b75fca5ca69ff9bdad18f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 04:14:26.646059    3994 out.go:177] * Starting "multinode-941000" primary control-plane node in "multinode-941000" cluster
	I0722 04:14:26.650041    3994 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0722 04:14:26.650065    3994 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0722 04:14:26.650076    3994 cache.go:56] Caching tarball of preloaded images
	I0722 04:14:26.650133    3994 preload.go:172] Found /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0722 04:14:26.650140    3994 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0722 04:14:26.650195    3994 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/multinode-941000/config.json ...
	I0722 04:14:26.650584    3994 start.go:360] acquireMachinesLock for multinode-941000: {Name:mkd413881e612ea8d9ddb0175c22cca270cd2452 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0722 04:14:26.650615    3994 start.go:364] duration metric: took 23.583µs to acquireMachinesLock for "multinode-941000"
	I0722 04:14:26.650626    3994 start.go:96] Skipping create...Using existing machine configuration
	I0722 04:14:26.650634    3994 fix.go:54] fixHost starting: 
	I0722 04:14:26.650758    3994 fix.go:112] recreateIfNeeded on multinode-941000: state=Stopped err=<nil>
	W0722 04:14:26.650767    3994 fix.go:138] unexpected machine state, will restart: <nil>
	I0722 04:14:26.659041    3994 out.go:177] * Restarting existing qemu2 VM for "multinode-941000" ...
	I0722 04:14:26.663039    3994 qemu.go:418] Using hvf for hardware acceleration
	I0722 04:14:26.663079    3994 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/multinode-941000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19313-1127/.minikube/machines/multinode-941000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/multinode-941000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:95:e7:7e:5f:b7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/multinode-941000/disk.qcow2
	I0722 04:14:26.665177    3994 main.go:141] libmachine: STDOUT: 
	I0722 04:14:26.665199    3994 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0722 04:14:26.665232    3994 fix.go:56] duration metric: took 14.598083ms for fixHost
	I0722 04:14:26.665237    3994 start.go:83] releasing machines lock for "multinode-941000", held for 14.615291ms
	W0722 04:14:26.665243    3994 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0722 04:14:26.665279    3994 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0722 04:14:26.665284    3994 start.go:729] Will try again in 5 seconds ...
	I0722 04:14:31.667494    3994 start.go:360] acquireMachinesLock for multinode-941000: {Name:mkd413881e612ea8d9ddb0175c22cca270cd2452 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0722 04:14:31.667896    3994 start.go:364] duration metric: took 300.208µs to acquireMachinesLock for "multinode-941000"
	I0722 04:14:31.667992    3994 start.go:96] Skipping create...Using existing machine configuration
	I0722 04:14:31.668016    3994 fix.go:54] fixHost starting: 
	I0722 04:14:31.668720    3994 fix.go:112] recreateIfNeeded on multinode-941000: state=Stopped err=<nil>
	W0722 04:14:31.668746    3994 fix.go:138] unexpected machine state, will restart: <nil>
	I0722 04:14:31.673459    3994 out.go:177] * Restarting existing qemu2 VM for "multinode-941000" ...
	I0722 04:14:31.684265    3994 qemu.go:418] Using hvf for hardware acceleration
	I0722 04:14:31.684569    3994 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/multinode-941000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19313-1127/.minikube/machines/multinode-941000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/multinode-941000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:95:e7:7e:5f:b7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/multinode-941000/disk.qcow2
	I0722 04:14:31.693806    3994 main.go:141] libmachine: STDOUT: 
	I0722 04:14:31.693866    3994 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0722 04:14:31.693947    3994 fix.go:56] duration metric: took 25.93375ms for fixHost
	I0722 04:14:31.693968    3994 start.go:83] releasing machines lock for "multinode-941000", held for 26.051083ms
	W0722 04:14:31.694184    3994 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-941000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-941000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0722 04:14:31.701134    3994 out.go:177] 
	W0722 04:14:31.705140    3994 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0722 04:14:31.705188    3994 out.go:239] * 
	* 
	W0722 04:14:31.707938    3994 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0722 04:14:31.716192    3994 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-941000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-941000 -n multinode-941000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-941000 -n multinode-941000: exit status 7 (67.19275ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-941000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.26s)
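Every failed start above stops at the same point: libmachine launches qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client and the client cannot reach the unix socket at /var/run/socket_vmnet ("Connection refused"). The following minimal Go sketch is not part of the test suite; it simply dials the socket path shown in the logs to confirm whether the socket_vmnet daemon on the build host is accepting connections at all.

	package main

	import (
		"fmt"
		"net"
		"os"
	)

	func main() {
		// Socket path used by the qemu2 driver in the logs above.
		const sock = "/var/run/socket_vmnet"
		conn, err := net.Dial("unix", sock)
		if err != nil {
			// Mirrors the "Connection refused" error seen in every failed start.
			fmt.Fprintf(os.Stderr, "cannot connect to %s: %v\n", sock, err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Printf("%s is accepting connections\n", sock)
	}

If this probe fails the same way outside minikube, the failures in this group are environmental (the socket_vmnet daemon on the CI host is not running or not reachable) rather than specific to the individual tests.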

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (20.03s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-941000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-941000-m01 --driver=qemu2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-941000-m01 --driver=qemu2 : exit status 80 (9.839911333s)

                                                
                                                
-- stdout --
	* [multinode-941000-m01] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19313
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19313-1127/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19313-1127/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-941000-m01" primary control-plane node in "multinode-941000-m01" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-941000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-941000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-941000-m02 --driver=qemu2 
E0722 04:14:47.486405    1618 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/functional-753000/client.crt: no such file or directory
multinode_test.go:472: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-941000-m02 --driver=qemu2 : exit status 80 (9.968024417s)

                                                
                                                
-- stdout --
	* [multinode-941000-m02] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19313
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19313-1127/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19313-1127/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-941000-m02" primary control-plane node in "multinode-941000-m02" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-941000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-941000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:474: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-941000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-941000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-941000: exit status 83 (82.04075ms)

                                                
                                                
-- stdout --
	* The control-plane node multinode-941000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-941000"

                                                
                                                
-- /stdout --
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-941000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-941000 -n multinode-941000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-941000 -n multinode-941000: exit status 7 (29.12075ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-941000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (20.03s)

                                                
                                    
TestPreload (9.91s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-171000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-171000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (9.767955833s)

                                                
                                                
-- stdout --
	* [test-preload-171000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19313
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19313-1127/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19313-1127/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "test-preload-171000" primary control-plane node in "test-preload-171000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-171000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0722 04:14:51.960499    4062 out.go:291] Setting OutFile to fd 1 ...
	I0722 04:14:51.960676    4062 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:14:51.960680    4062 out.go:304] Setting ErrFile to fd 2...
	I0722 04:14:51.960683    4062 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:14:51.960791    4062 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19313-1127/.minikube/bin
	I0722 04:14:51.962053    4062 out.go:298] Setting JSON to false
	I0722 04:14:51.978403    4062 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4460,"bootTime":1721642431,"procs":465,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0722 04:14:51.978475    4062 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0722 04:14:51.984214    4062 out.go:177] * [test-preload-171000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0722 04:14:51.992224    4062 notify.go:220] Checking for updates...
	I0722 04:14:52.000194    4062 out.go:177]   - MINIKUBE_LOCATION=19313
	I0722 04:14:52.007102    4062 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19313-1127/kubeconfig
	I0722 04:14:52.010152    4062 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0722 04:14:52.016138    4062 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0722 04:14:52.020134    4062 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19313-1127/.minikube
	I0722 04:14:52.027076    4062 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0722 04:14:52.031435    4062 config.go:182] Loaded profile config "multinode-941000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0722 04:14:52.031502    4062 driver.go:392] Setting default libvirt URI to qemu:///system
	I0722 04:14:52.035070    4062 out.go:177] * Using the qemu2 driver based on user configuration
	I0722 04:14:52.041970    4062 start.go:297] selected driver: qemu2
	I0722 04:14:52.041976    4062 start.go:901] validating driver "qemu2" against <nil>
	I0722 04:14:52.041982    4062 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0722 04:14:52.044505    4062 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0722 04:14:52.049160    4062 out.go:177] * Automatically selected the socket_vmnet network
	I0722 04:14:52.053321    4062 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0722 04:14:52.053353    4062 cni.go:84] Creating CNI manager for ""
	I0722 04:14:52.053362    4062 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0722 04:14:52.053366    4062 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0722 04:14:52.053398    4062 start.go:340] cluster config:
	{Name:test-preload-171000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-171000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Conta
inerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/so
cket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 04:14:52.057400    4062 iso.go:125] acquiring lock: {Name:mkd71eaf3e91c1dd737b75fca5ca69ff9bdad18f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 04:14:52.066144    4062 out.go:177] * Starting "test-preload-171000" primary control-plane node in "test-preload-171000" cluster
	I0722 04:14:52.069092    4062 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I0722 04:14:52.069215    4062 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/test-preload-171000/config.json ...
	I0722 04:14:52.069234    4062 cache.go:107] acquiring lock: {Name:mk0a4a038b81605f387adfa4e74fec8a71c61136 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 04:14:52.069244    4062 cache.go:107] acquiring lock: {Name:mk9dd4e3af660270416652a3bd406c11f79f9580 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 04:14:52.069263    4062 cache.go:107] acquiring lock: {Name:mkf4963431451fe130955a1657d5d779e832ab78 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 04:14:52.069283    4062 cache.go:107] acquiring lock: {Name:mk4e56718e4d983dbd7a995b68cfe262b207eee9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 04:14:52.069250    4062 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/test-preload-171000/config.json: {Name:mk5cd6e48f8bc37d650e7c9b1769d05f45f1ec3e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 04:14:52.069349    4062 cache.go:107] acquiring lock: {Name:mkdbfdca2c55f81070a063415433b4ad96c8932c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 04:14:52.069477    4062 cache.go:107] acquiring lock: {Name:mk662b6d35ab77f05ce8301de8e586681ebc85e2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 04:14:52.069474    4062 cache.go:107] acquiring lock: {Name:mkaa3d841fa95d2daf36c0569b97d901df74a890 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 04:14:52.069504    4062 cache.go:107] acquiring lock: {Name:mkd06923016bd2c560843b4243ed44b952ea81c3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 04:14:52.069684    4062 start.go:360] acquireMachinesLock for test-preload-171000: {Name:mkd413881e612ea8d9ddb0175c22cca270cd2452 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0722 04:14:52.069700    4062 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0722 04:14:52.069744    4062 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0722 04:14:52.069776    4062 start.go:364] duration metric: took 76.166µs to acquireMachinesLock for "test-preload-171000"
	I0722 04:14:52.069796    4062 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0722 04:14:52.069812    4062 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 04:14:52.069788    4062 start.go:93] Provisioning new machine with config: &{Name:test-preload-171000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.24.4 ClusterName:test-preload-171000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOp
tions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0722 04:14:52.069854    4062 start.go:125] createHost starting for "" (driver="qemu2")
	I0722 04:14:52.069907    4062 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0722 04:14:52.069986    4062 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0722 04:14:52.069912    4062 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0722 04:14:52.069914    4062 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0722 04:14:52.073975    4062 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0722 04:14:52.082166    4062 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0722 04:14:52.082194    4062 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0722 04:14:52.082428    4062 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0722 04:14:52.082793    4062 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 04:14:52.083484    4062 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0722 04:14:52.083680    4062 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0722 04:14:52.084230    4062 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0722 04:14:52.084295    4062 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0722 04:14:52.093347    4062 start.go:159] libmachine.API.Create for "test-preload-171000" (driver="qemu2")
	I0722 04:14:52.093370    4062 client.go:168] LocalClient.Create starting
	I0722 04:14:52.093503    4062 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19313-1127/.minikube/certs/ca.pem
	I0722 04:14:52.093549    4062 main.go:141] libmachine: Decoding PEM data...
	I0722 04:14:52.093559    4062 main.go:141] libmachine: Parsing certificate...
	I0722 04:14:52.093608    4062 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19313-1127/.minikube/certs/cert.pem
	I0722 04:14:52.093638    4062 main.go:141] libmachine: Decoding PEM data...
	I0722 04:14:52.093648    4062 main.go:141] libmachine: Parsing certificate...
	I0722 04:14:52.094014    4062 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19313-1127/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0722 04:14:52.229016    4062 main.go:141] libmachine: Creating SSH key...
	I0722 04:14:52.325686    4062 main.go:141] libmachine: Creating Disk image...
	I0722 04:14:52.325709    4062 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0722 04:14:52.325931    4062 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/test-preload-171000/disk.qcow2.raw /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/test-preload-171000/disk.qcow2
	I0722 04:14:52.335723    4062 main.go:141] libmachine: STDOUT: 
	I0722 04:14:52.335738    4062 main.go:141] libmachine: STDERR: 
	I0722 04:14:52.335787    4062 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/test-preload-171000/disk.qcow2 +20000M
	I0722 04:14:52.344712    4062 main.go:141] libmachine: STDOUT: Image resized.
	
	I0722 04:14:52.344736    4062 main.go:141] libmachine: STDERR: 
	I0722 04:14:52.344760    4062 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/test-preload-171000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/test-preload-171000/disk.qcow2
	I0722 04:14:52.344765    4062 main.go:141] libmachine: Starting QEMU VM...
	I0722 04:14:52.344779    4062 qemu.go:418] Using hvf for hardware acceleration
	I0722 04:14:52.344807    4062 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/test-preload-171000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19313-1127/.minikube/machines/test-preload-171000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/test-preload-171000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:69:54:5b:e3:c1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/test-preload-171000/disk.qcow2
	I0722 04:14:52.346729    4062 main.go:141] libmachine: STDOUT: 
	I0722 04:14:52.346746    4062 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0722 04:14:52.346767    4062 client.go:171] duration metric: took 253.396542ms to LocalClient.Create
	I0722 04:14:52.535424    4062 cache.go:162] opening:  /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	I0722 04:14:52.569458    4062 cache.go:162] opening:  /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0722 04:14:52.577743    4062 cache.go:162] opening:  /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0722 04:14:52.620029    4062 cache.go:162] opening:  /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	I0722 04:14:52.621134    4062 cache.go:162] opening:  /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	I0722 04:14:52.670154    4062 cache.go:162] opening:  /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	W0722 04:14:52.670489    4062 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0722 04:14:52.670551    4062 cache.go:162] opening:  /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0722 04:14:52.719670    4062 cache.go:157] /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I0722 04:14:52.719744    4062 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/19313-1127/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 650.463541ms
	I0722 04:14:52.719781    4062 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	I0722 04:14:53.989300    4062 cache.go:157] /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I0722 04:14:53.989373    4062 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/19313-1127/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 1.920016s
	I0722 04:14:53.989415    4062 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I0722 04:14:54.052588    4062 cache.go:157] /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I0722 04:14:54.052654    4062 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/19313-1127/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 1.983169834s
	I0722 04:14:54.052705    4062 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I0722 04:14:54.346944    4062 start.go:128] duration metric: took 2.277094375s to createHost
	I0722 04:14:54.347002    4062 start.go:83] releasing machines lock for "test-preload-171000", held for 2.277245292s
	W0722 04:14:54.347057    4062 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0722 04:14:54.362248    4062 out.go:177] * Deleting "test-preload-171000" in qemu2 ...
	W0722 04:14:54.386739    4062 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0722 04:14:54.386775    4062 start.go:729] Will try again in 5 seconds ...
	W0722 04:14:55.113636    4062 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0722 04:14:55.113760    4062 cache.go:162] opening:  /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0722 04:14:55.676889    4062 cache.go:157] /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0722 04:14:55.676932    4062 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19313-1127/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 3.607741334s
	I0722 04:14:55.676961    4062 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0722 04:14:56.486357    4062 cache.go:157] /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I0722 04:14:56.486410    4062 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/19313-1127/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 4.417229291s
	I0722 04:14:56.486434    4062 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I0722 04:14:56.735939    4062 cache.go:157] /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I0722 04:14:56.735988    4062 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/19313-1127/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 4.666624542s
	I0722 04:14:56.736015    4062 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I0722 04:14:56.926206    4062 cache.go:157] /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I0722 04:14:56.926275    4062 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/19313-1127/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 4.85698925s
	I0722 04:14:56.926305    4062 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I0722 04:14:59.387929    4062 start.go:360] acquireMachinesLock for test-preload-171000: {Name:mkd413881e612ea8d9ddb0175c22cca270cd2452 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0722 04:14:59.388333    4062 start.go:364] duration metric: took 331.375µs to acquireMachinesLock for "test-preload-171000"
	I0722 04:14:59.388425    4062 start.go:93] Provisioning new machine with config: &{Name:test-preload-171000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.24.4 ClusterName:test-preload-171000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOp
tions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0722 04:14:59.388675    4062 start.go:125] createHost starting for "" (driver="qemu2")
	I0722 04:14:59.408282    4062 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0722 04:14:59.459830    4062 start.go:159] libmachine.API.Create for "test-preload-171000" (driver="qemu2")
	I0722 04:14:59.459876    4062 client.go:168] LocalClient.Create starting
	I0722 04:14:59.460003    4062 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19313-1127/.minikube/certs/ca.pem
	I0722 04:14:59.460063    4062 main.go:141] libmachine: Decoding PEM data...
	I0722 04:14:59.460078    4062 main.go:141] libmachine: Parsing certificate...
	I0722 04:14:59.460137    4062 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19313-1127/.minikube/certs/cert.pem
	I0722 04:14:59.460181    4062 main.go:141] libmachine: Decoding PEM data...
	I0722 04:14:59.460196    4062 main.go:141] libmachine: Parsing certificate...
	I0722 04:14:59.460640    4062 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19313-1127/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0722 04:14:59.597912    4062 main.go:141] libmachine: Creating SSH key...
	I0722 04:14:59.625213    4062 main.go:141] libmachine: Creating Disk image...
	I0722 04:14:59.625219    4062 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0722 04:14:59.625400    4062 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/test-preload-171000/disk.qcow2.raw /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/test-preload-171000/disk.qcow2
	I0722 04:14:59.634387    4062 main.go:141] libmachine: STDOUT: 
	I0722 04:14:59.634405    4062 main.go:141] libmachine: STDERR: 
	I0722 04:14:59.634454    4062 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/test-preload-171000/disk.qcow2 +20000M
	I0722 04:14:59.642294    4062 main.go:141] libmachine: STDOUT: Image resized.
	
	I0722 04:14:59.642309    4062 main.go:141] libmachine: STDERR: 
	I0722 04:14:59.642326    4062 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/test-preload-171000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/test-preload-171000/disk.qcow2
	I0722 04:14:59.642328    4062 main.go:141] libmachine: Starting QEMU VM...
	I0722 04:14:59.642342    4062 qemu.go:418] Using hvf for hardware acceleration
	I0722 04:14:59.642377    4062 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/test-preload-171000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19313-1127/.minikube/machines/test-preload-171000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/test-preload-171000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:4e:69:af:dc:c8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/test-preload-171000/disk.qcow2
	I0722 04:14:59.643956    4062 main.go:141] libmachine: STDOUT: 
	I0722 04:14:59.643972    4062 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0722 04:14:59.643985    4062 client.go:171] duration metric: took 184.104666ms to LocalClient.Create
	I0722 04:15:01.644489    4062 start.go:128] duration metric: took 2.255786375s to createHost
	I0722 04:15:01.644590    4062 start.go:83] releasing machines lock for "test-preload-171000", held for 2.25626125s
	W0722 04:15:01.644850    4062 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-171000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p test-preload-171000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0722 04:15:01.662449    4062 out.go:177] 
	W0722 04:15:01.670567    4062 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0722 04:15:01.670603    4062 out.go:239] * 
	* 
	W0722 04:15:01.672805    4062 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0722 04:15:01.687462    4062 out.go:177] 

                                                
                                                
** /stderr **
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-171000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:626: *** TestPreload FAILED at 2024-07-22 04:15:01.703 -0700 PDT m=+2809.977596751
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-171000 -n test-preload-171000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-171000 -n test-preload-171000: exit status 7 (63.246625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-171000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-171000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-171000
--- FAIL: TestPreload (9.91s)
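Although host creation fails, the log shows the image-caching half of TestPreload completing: tarballs for pause 3.7, coredns v1.8.6 and the v1.24.4 control-plane images are written under .minikube/cache/images/arm64. A hedged Go sketch (cache root and file names copied from the log above; the checker itself is illustrative, not part of the suite) can confirm that the cached artifacts survived the failed VM start:

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
	)

	func main() {
		// Cache root and tarball names as they appear in the TestPreload log.
		cache := "/Users/jenkins/minikube-integration/19313-1127/.minikube/cache/images/arm64"
		images := []string{
			"registry.k8s.io/pause_3.7",
			"registry.k8s.io/coredns/coredns_v1.8.6",
			"registry.k8s.io/kube-scheduler_v1.24.4",
			"registry.k8s.io/kube-apiserver_v1.24.4",
			"registry.k8s.io/kube-controller-manager_v1.24.4",
			"registry.k8s.io/kube-proxy_v1.24.4",
			"gcr.io/k8s-minikube/storage-provisioner_v5",
		}
		for _, img := range images {
			p := filepath.Join(cache, img)
			if info, err := os.Stat(p); err != nil {
				fmt.Printf("missing: %s (%v)\n", p, err)
			} else {
				fmt.Printf("cached:  %s (%d bytes)\n", p, info.Size())
			}
		}
	}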

                                                
                                    
TestScheduledStopUnix (9.87s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-268000 --memory=2048 --driver=qemu2 
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-268000 --memory=2048 --driver=qemu2 : exit status 80 (9.7249725s)

                                                
                                                
-- stdout --
	* [scheduled-stop-268000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19313
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19313-1127/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19313-1127/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-268000" primary control-plane node in "scheduled-stop-268000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-268000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-268000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

                                                
                                                
-- stdout --
	* [scheduled-stop-268000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19313
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19313-1127/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19313-1127/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-268000" primary control-plane node in "scheduled-stop-268000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-268000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-268000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:626: *** TestScheduledStopUnix FAILED at 2024-07-22 04:15:11.569912 -0700 PDT m=+2819.844632001
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-268000 -n scheduled-stop-268000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-268000 -n scheduled-stop-268000: exit status 7 (65.264875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-268000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-268000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-268000
--- FAIL: TestScheduledStopUnix (9.87s)
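This failure (like the other qemu2 start failures in this run) has a single visible cause: the qemu2 driver cannot reach the socket_vmnet helper at /var/run/socket_vmnet, so every VM create attempt aborts with "Connection refused" before provisioning begins. As an illustration only (not part of the test suite), a minimal Go sketch that probes that socket path, taken verbatim from the errors above, and reproduces the same refused-connection condition outside of minikube:

	// vmnet_probe.go - minimal sketch (illustration only, not part of the test suite):
	// probe the socket_vmnet unix socket that the qemu2 driver needs. The path is
	// taken from the "Failed to connect to \"/var/run/socket_vmnet\": Connection
	// refused" errors logged above.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		const sock = "/var/run/socket_vmnet" // path reported by the failing starts
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// A "connection refused" here matches the GUEST_PROVISION failure mode above.
			fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

If this probe is refused on the build agent, the socket_vmnet daemon is simply not running (or not listening on that path), which would explain every qemu2 start in this run failing in the same way.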

TestSkaffold (13.03s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/skaffold.exe1345949961 version
skaffold_test.go:59: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/skaffold.exe1345949961 version: (1.060504375s)
skaffold_test.go:63: skaffold version: v2.12.0
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-202000 --memory=2600 --driver=qemu2 
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-202000 --memory=2600 --driver=qemu2 : exit status 80 (9.952627291s)

-- stdout --
	* [skaffold-202000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19313
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19313-1127/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19313-1127/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-202000" primary control-plane node in "skaffold-202000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-202000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-202000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
skaffold_test.go:68: starting minikube: exit status 80

-- stdout --
	* [skaffold-202000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19313
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19313-1127/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19313-1127/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-202000" primary control-plane node in "skaffold-202000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-202000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-202000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:626: *** TestSkaffold FAILED at 2024-07-22 04:15:24.60666 -0700 PDT m=+2832.881541793
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-202000 -n skaffold-202000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-202000 -n skaffold-202000: exit status 7 (55.847916ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-202000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-202000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-202000
--- FAIL: TestSkaffold (13.03s)

TestRunningBinaryUpgrade (613.24s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.90591632 start -p running-upgrade-724000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:120: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.90591632 start -p running-upgrade-724000 --memory=2200 --vm-driver=qemu2 : (57.638039666s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-arm64 start -p running-upgrade-724000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
E0722 04:17:48.098422    1618 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/addons-974000/client.crt: no such file or directory
E0722 04:17:50.551644    1618 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/functional-753000/client.crt: no such file or directory
version_upgrade_test.go:130: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p running-upgrade-724000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m41.53313275s)

-- stdout --
	* [running-upgrade-724000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19313
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19313-1127/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19313-1127/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	* Using the qemu2 driver based on existing profile
	* Starting "running-upgrade-724000" primary control-plane node in "running-upgrade-724000" cluster
	* Updating the running qemu2 "running-upgrade-724000" VM ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0722 04:17:05.434351    4522 out.go:291] Setting OutFile to fd 1 ...
	I0722 04:17:05.434495    4522 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:17:05.434498    4522 out.go:304] Setting ErrFile to fd 2...
	I0722 04:17:05.434501    4522 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:17:05.434637    4522 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19313-1127/.minikube/bin
	I0722 04:17:05.435658    4522 out.go:298] Setting JSON to false
	I0722 04:17:05.452915    4522 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4594,"bootTime":1721642431,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0722 04:17:05.452991    4522 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0722 04:17:05.458085    4522 out.go:177] * [running-upgrade-724000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0722 04:17:05.466122    4522 out.go:177]   - MINIKUBE_LOCATION=19313
	I0722 04:17:05.466181    4522 notify.go:220] Checking for updates...
	I0722 04:17:05.474101    4522 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19313-1127/kubeconfig
	I0722 04:17:05.478130    4522 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0722 04:17:05.481105    4522 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0722 04:17:05.484107    4522 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19313-1127/.minikube
	I0722 04:17:05.487048    4522 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0722 04:17:05.490336    4522 config.go:182] Loaded profile config "running-upgrade-724000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0722 04:17:05.493074    4522 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0722 04:17:05.494154    4522 driver.go:392] Setting default libvirt URI to qemu:///system
	I0722 04:17:05.498070    4522 out.go:177] * Using the qemu2 driver based on existing profile
	I0722 04:17:05.504930    4522 start.go:297] selected driver: qemu2
	I0722 04:17:05.504935    4522 start.go:901] validating driver "qemu2" against &{Name:running-upgrade-724000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50263 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgra
de-724000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizat
ions:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0722 04:17:05.504996    4522 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0722 04:17:05.507421    4522 cni.go:84] Creating CNI manager for ""
	I0722 04:17:05.507440    4522 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0722 04:17:05.507472    4522 start.go:340] cluster config:
	{Name:running-upgrade-724000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50263 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-724000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:
SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0722 04:17:05.507525    4522 iso.go:125] acquiring lock: {Name:mkd71eaf3e91c1dd737b75fca5ca69ff9bdad18f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 04:17:05.515123    4522 out.go:177] * Starting "running-upgrade-724000" primary control-plane node in "running-upgrade-724000" cluster
	I0722 04:17:05.518997    4522 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0722 04:17:05.519011    4522 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0722 04:17:05.519018    4522 cache.go:56] Caching tarball of preloaded images
	I0722 04:17:05.519075    4522 preload.go:172] Found /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0722 04:17:05.519082    4522 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0722 04:17:05.519132    4522 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/running-upgrade-724000/config.json ...
	I0722 04:17:05.519475    4522 start.go:360] acquireMachinesLock for running-upgrade-724000: {Name:mkd413881e612ea8d9ddb0175c22cca270cd2452 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0722 04:17:05.519511    4522 start.go:364] duration metric: took 30.166µs to acquireMachinesLock for "running-upgrade-724000"
	I0722 04:17:05.519519    4522 start.go:96] Skipping create...Using existing machine configuration
	I0722 04:17:05.519525    4522 fix.go:54] fixHost starting: 
	I0722 04:17:05.520226    4522 fix.go:112] recreateIfNeeded on running-upgrade-724000: state=Running err=<nil>
	W0722 04:17:05.520236    4522 fix.go:138] unexpected machine state, will restart: <nil>
	I0722 04:17:05.524101    4522 out.go:177] * Updating the running qemu2 "running-upgrade-724000" VM ...
	I0722 04:17:05.532058    4522 machine.go:94] provisionDockerMachine start ...
	I0722 04:17:05.532098    4522 main.go:141] libmachine: Using SSH client type: native
	I0722 04:17:05.532214    4522 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1011e2a10] 0x1011e5270 <nil>  [] 0s} localhost 50231 <nil> <nil>}
	I0722 04:17:05.532219    4522 main.go:141] libmachine: About to run SSH command:
	hostname
	I0722 04:17:05.598339    4522 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-724000
	
	I0722 04:17:05.598354    4522 buildroot.go:166] provisioning hostname "running-upgrade-724000"
	I0722 04:17:05.598395    4522 main.go:141] libmachine: Using SSH client type: native
	I0722 04:17:05.598508    4522 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1011e2a10] 0x1011e5270 <nil>  [] 0s} localhost 50231 <nil> <nil>}
	I0722 04:17:05.598514    4522 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-724000 && echo "running-upgrade-724000" | sudo tee /etc/hostname
	I0722 04:17:05.667247    4522 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-724000
	
	I0722 04:17:05.667303    4522 main.go:141] libmachine: Using SSH client type: native
	I0722 04:17:05.667413    4522 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1011e2a10] 0x1011e5270 <nil>  [] 0s} localhost 50231 <nil> <nil>}
	I0722 04:17:05.667421    4522 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-724000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-724000/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-724000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0722 04:17:05.734863    4522 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0722 04:17:05.734875    4522 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19313-1127/.minikube CaCertPath:/Users/jenkins/minikube-integration/19313-1127/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19313-1127/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19313-1127/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19313-1127/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19313-1127/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19313-1127/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19313-1127/.minikube}
	I0722 04:17:05.734884    4522 buildroot.go:174] setting up certificates
	I0722 04:17:05.734888    4522 provision.go:84] configureAuth start
	I0722 04:17:05.734892    4522 provision.go:143] copyHostCerts
	I0722 04:17:05.734960    4522 exec_runner.go:144] found /Users/jenkins/minikube-integration/19313-1127/.minikube/key.pem, removing ...
	I0722 04:17:05.734967    4522 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19313-1127/.minikube/key.pem
	I0722 04:17:05.735092    4522 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19313-1127/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19313-1127/.minikube/key.pem (1675 bytes)
	I0722 04:17:05.735267    4522 exec_runner.go:144] found /Users/jenkins/minikube-integration/19313-1127/.minikube/ca.pem, removing ...
	I0722 04:17:05.735270    4522 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19313-1127/.minikube/ca.pem
	I0722 04:17:05.735324    4522 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19313-1127/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19313-1127/.minikube/ca.pem (1078 bytes)
	I0722 04:17:05.735423    4522 exec_runner.go:144] found /Users/jenkins/minikube-integration/19313-1127/.minikube/cert.pem, removing ...
	I0722 04:17:05.735434    4522 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19313-1127/.minikube/cert.pem
	I0722 04:17:05.735486    4522 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19313-1127/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19313-1127/.minikube/cert.pem (1123 bytes)
	I0722 04:17:05.735610    4522 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19313-1127/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19313-1127/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-724000 san=[127.0.0.1 localhost minikube running-upgrade-724000]
	I0722 04:17:05.781882    4522 provision.go:177] copyRemoteCerts
	I0722 04:17:05.781921    4522 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0722 04:17:05.781928    4522 sshutil.go:53] new ssh client: &{IP:localhost Port:50231 SSHKeyPath:/Users/jenkins/minikube-integration/19313-1127/.minikube/machines/running-upgrade-724000/id_rsa Username:docker}
	I0722 04:17:05.817259    4522 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0722 04:17:05.825094    4522 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0722 04:17:05.834094    4522 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1127/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0722 04:17:05.840674    4522 provision.go:87] duration metric: took 105.782417ms to configureAuth
	I0722 04:17:05.840682    4522 buildroot.go:189] setting minikube options for container-runtime
	I0722 04:17:05.840788    4522 config.go:182] Loaded profile config "running-upgrade-724000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0722 04:17:05.840826    4522 main.go:141] libmachine: Using SSH client type: native
	I0722 04:17:05.840948    4522 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1011e2a10] 0x1011e5270 <nil>  [] 0s} localhost 50231 <nil> <nil>}
	I0722 04:17:05.840953    4522 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0722 04:17:05.906885    4522 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0722 04:17:05.906897    4522 buildroot.go:70] root file system type: tmpfs
	I0722 04:17:05.906950    4522 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0722 04:17:05.907014    4522 main.go:141] libmachine: Using SSH client type: native
	I0722 04:17:05.907119    4522 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1011e2a10] 0x1011e5270 <nil>  [] 0s} localhost 50231 <nil> <nil>}
	I0722 04:17:05.907154    4522 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0722 04:17:05.977734    4522 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0722 04:17:05.977785    4522 main.go:141] libmachine: Using SSH client type: native
	I0722 04:17:05.977904    4522 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1011e2a10] 0x1011e5270 <nil>  [] 0s} localhost 50231 <nil> <nil>}
	I0722 04:17:05.977913    4522 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0722 04:17:06.048145    4522 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0722 04:17:06.048156    4522 machine.go:97] duration metric: took 516.099042ms to provisionDockerMachine
	I0722 04:17:06.048162    4522 start.go:293] postStartSetup for "running-upgrade-724000" (driver="qemu2")
	I0722 04:17:06.048168    4522 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0722 04:17:06.048214    4522 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0722 04:17:06.048223    4522 sshutil.go:53] new ssh client: &{IP:localhost Port:50231 SSHKeyPath:/Users/jenkins/minikube-integration/19313-1127/.minikube/machines/running-upgrade-724000/id_rsa Username:docker}
	I0722 04:17:06.084533    4522 ssh_runner.go:195] Run: cat /etc/os-release
	I0722 04:17:06.085905    4522 info.go:137] Remote host: Buildroot 2021.02.12
	I0722 04:17:06.085912    4522 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19313-1127/.minikube/addons for local assets ...
	I0722 04:17:06.085985    4522 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19313-1127/.minikube/files for local assets ...
	I0722 04:17:06.086096    4522 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19313-1127/.minikube/files/etc/ssl/certs/16182.pem -> 16182.pem in /etc/ssl/certs
	I0722 04:17:06.086216    4522 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0722 04:17:06.088910    4522 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1127/.minikube/files/etc/ssl/certs/16182.pem --> /etc/ssl/certs/16182.pem (1708 bytes)
	I0722 04:17:06.096207    4522 start.go:296] duration metric: took 48.040667ms for postStartSetup
	I0722 04:17:06.096222    4522 fix.go:56] duration metric: took 576.704917ms for fixHost
	I0722 04:17:06.096255    4522 main.go:141] libmachine: Using SSH client type: native
	I0722 04:17:06.096358    4522 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1011e2a10] 0x1011e5270 <nil>  [] 0s} localhost 50231 <nil> <nil>}
	I0722 04:17:06.096362    4522 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0722 04:17:06.164134    4522 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721647025.999932013
	
	I0722 04:17:06.164141    4522 fix.go:216] guest clock: 1721647025.999932013
	I0722 04:17:06.164145    4522 fix.go:229] Guest: 2024-07-22 04:17:05.999932013 -0700 PDT Remote: 2024-07-22 04:17:06.096224 -0700 PDT m=+0.682777626 (delta=-96.291987ms)
	I0722 04:17:06.164156    4522 fix.go:200] guest clock delta is within tolerance: -96.291987ms
	I0722 04:17:06.164159    4522 start.go:83] releasing machines lock for "running-upgrade-724000", held for 644.651167ms
	I0722 04:17:06.164209    4522 ssh_runner.go:195] Run: cat /version.json
	I0722 04:17:06.164217    4522 sshutil.go:53] new ssh client: &{IP:localhost Port:50231 SSHKeyPath:/Users/jenkins/minikube-integration/19313-1127/.minikube/machines/running-upgrade-724000/id_rsa Username:docker}
	I0722 04:17:06.164209    4522 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0722 04:17:06.164239    4522 sshutil.go:53] new ssh client: &{IP:localhost Port:50231 SSHKeyPath:/Users/jenkins/minikube-integration/19313-1127/.minikube/machines/running-upgrade-724000/id_rsa Username:docker}
	W0722 04:17:06.164782    4522 sshutil.go:64] dial failure (will retry): dial tcp [::1]:50231: connect: connection refused
	I0722 04:17:06.164804    4522 retry.go:31] will retry after 173.332991ms: dial tcp [::1]:50231: connect: connection refused
	W0722 04:17:06.376678    4522 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0722 04:17:06.376739    4522 ssh_runner.go:195] Run: systemctl --version
	I0722 04:17:06.378804    4522 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0722 04:17:06.380587    4522 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0722 04:17:06.380618    4522 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0722 04:17:06.383932    4522 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0722 04:17:06.388580    4522 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0722 04:17:06.388589    4522 start.go:495] detecting cgroup driver to use...
	I0722 04:17:06.388689    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0722 04:17:06.393653    4522 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0722 04:17:06.397121    4522 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0722 04:17:06.400452    4522 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0722 04:17:06.400475    4522 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0722 04:17:06.403742    4522 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0722 04:17:06.406541    4522 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0722 04:17:06.409541    4522 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0722 04:17:06.412908    4522 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0722 04:17:06.416268    4522 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0722 04:17:06.419212    4522 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0722 04:17:06.421997    4522 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0722 04:17:06.424840    4522 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0722 04:17:06.431821    4522 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0722 04:17:06.434578    4522 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 04:17:06.527026    4522 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0722 04:17:06.533561    4522 start.go:495] detecting cgroup driver to use...
	I0722 04:17:06.533625    4522 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0722 04:17:06.541697    4522 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0722 04:17:06.546650    4522 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0722 04:17:06.556444    4522 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0722 04:17:06.561264    4522 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0722 04:17:06.566344    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0722 04:17:06.571588    4522 ssh_runner.go:195] Run: which cri-dockerd
	I0722 04:17:06.573152    4522 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0722 04:17:06.575865    4522 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0722 04:17:06.581035    4522 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0722 04:17:06.681779    4522 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0722 04:17:06.772795    4522 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0722 04:17:06.772858    4522 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0722 04:17:06.778113    4522 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 04:17:06.872323    4522 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0722 04:17:20.272958    4522 ssh_runner.go:235] Completed: sudo systemctl restart docker: (13.400783667s)
	I0722 04:17:20.273025    4522 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0722 04:17:20.278064    4522 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0722 04:17:20.285941    4522 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0722 04:17:20.292329    4522 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0722 04:17:20.379987    4522 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0722 04:17:20.455954    4522 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 04:17:20.536641    4522 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0722 04:17:20.542468    4522 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0722 04:17:20.547907    4522 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 04:17:20.634054    4522 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0722 04:17:20.672129    4522 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0722 04:17:20.672213    4522 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0722 04:17:20.675838    4522 start.go:563] Will wait 60s for crictl version
	I0722 04:17:20.675885    4522 ssh_runner.go:195] Run: which crictl
	I0722 04:17:20.677051    4522 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0722 04:17:20.688664    4522 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0722 04:17:20.688725    4522 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0722 04:17:20.701217    4522 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0722 04:17:20.720947    4522 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0722 04:17:20.721018    4522 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0722 04:17:20.722296    4522 kubeadm.go:883] updating cluster {Name:running-upgrade-724000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50263 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName
:running-upgrade-724000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Di
sableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0722 04:17:20.722335    4522 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0722 04:17:20.722370    4522 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0722 04:17:20.732397    4522 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0722 04:17:20.732423    4522 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0722 04:17:20.732466    4522 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0722 04:17:20.735401    4522 ssh_runner.go:195] Run: which lz4
	I0722 04:17:20.736844    4522 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0722 04:17:20.738096    4522 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0722 04:17:20.738108    4522 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0722 04:17:21.698114    4522 docker.go:649] duration metric: took 961.302625ms to copy over tarball
	I0722 04:17:21.698169    4522 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0722 04:17:22.836294    4522 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.138113458s)
	I0722 04:17:22.836310    4522 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0722 04:17:22.851827    4522 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0722 04:17:22.854871    4522 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0722 04:17:22.860255    4522 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 04:17:22.935554    4522 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0722 04:17:23.104112    4522 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0722 04:17:23.115044    4522 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0722 04:17:23.115052    4522 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0722 04:17:23.115057    4522 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0722 04:17:23.120313    4522 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 04:17:23.122182    4522 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0722 04:17:23.123882    4522 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 04:17:23.124084    4522 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0722 04:17:23.125286    4522 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0722 04:17:23.125680    4522 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0722 04:17:23.126447    4522 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0722 04:17:23.128170    4522 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0722 04:17:23.128585    4522 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0722 04:17:23.128619    4522 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0722 04:17:23.128745    4522 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0722 04:17:23.130271    4522 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0722 04:17:23.130616    4522 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0722 04:17:23.130855    4522 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0722 04:17:23.132556    4522 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0722 04:17:23.132577    4522 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0722 04:17:23.524399    4522 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0722 04:17:23.539547    4522 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0722 04:17:23.539578    4522 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0722 04:17:23.539645    4522 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0722 04:17:23.551209    4522 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0722 04:17:23.577046    4522 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0722 04:17:23.577728    4522 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0722 04:17:23.582917    4522 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	W0722 04:17:23.587004    4522 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0722 04:17:23.587132    4522 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0722 04:17:23.588760    4522 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0722 04:17:23.588780    4522 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0722 04:17:23.588820    4522 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0722 04:17:23.591236    4522 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0722 04:17:23.591252    4522 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0722 04:17:23.591305    4522 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0722 04:17:23.613844    4522 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0722 04:17:23.613863    4522 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0722 04:17:23.613869    4522 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0722 04:17:23.613878    4522 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0722 04:17:23.613921    4522 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0722 04:17:23.613921    4522 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0722 04:17:23.613936    4522 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0722 04:17:23.613937    4522 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0722 04:17:23.614016    4522 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0722 04:17:23.618344    4522 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0722 04:17:23.628962    4522 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0722 04:17:23.628961    4522 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0722 04:17:23.629068    4522 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0722 04:17:23.629075    4522 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0722 04:17:23.629085    4522 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0722 04:17:23.634238    4522 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0722 04:17:23.634248    4522 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0722 04:17:23.634257    4522 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0722 04:17:23.634268    4522 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0722 04:17:23.634296    4522 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0722 04:17:23.636192    4522 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0722 04:17:23.651152    4522 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0722 04:17:23.657470    4522 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0722 04:17:23.657485    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0722 04:17:23.659058    4522 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0722 04:17:23.659078    4522 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0722 04:17:23.659152    4522 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0722 04:17:23.717395    4522 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0722 04:17:23.717407    4522 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0722 04:17:23.722683    4522 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0722 04:17:23.722693    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0722 04:17:23.760158    4522 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	W0722 04:17:25.850299    4522 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0722 04:17:25.850420    4522 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 04:17:25.892479    4522 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0722 04:17:25.892508    4522 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 04:17:25.892563    4522 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 04:17:25.923586    4522 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0722 04:17:25.923704    4522 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0722 04:17:25.925047    4522 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0722 04:17:25.925060    4522 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0722 04:17:25.992750    4522 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0722 04:17:25.992778    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0722 04:17:26.383494    4522 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0722 04:17:26.383533    4522 cache_images.go:92] duration metric: took 3.268510667s to LoadCachedImages
	W0722 04:17:26.383570    4522 out.go:239] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
	I0722 04:17:26.383575    4522 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0722 04:17:26.383627    4522 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=running-upgrade-724000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-724000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0722 04:17:26.383691    4522 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0722 04:17:26.418364    4522 cni.go:84] Creating CNI manager for ""
	I0722 04:17:26.418375    4522 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0722 04:17:26.418380    4522 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0722 04:17:26.418388    4522 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-724000 NodeName:running-upgrade-724000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0722 04:17:26.418453    4522 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "running-upgrade-724000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0722 04:17:26.418508    4522 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0722 04:17:26.421704    4522 binaries.go:44] Found k8s binaries, skipping transfer
	I0722 04:17:26.421735    4522 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0722 04:17:26.424524    4522 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0722 04:17:26.429371    4522 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0722 04:17:26.436415    4522 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0722 04:17:26.442637    4522 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0722 04:17:26.444010    4522 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 04:17:26.522507    4522 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 04:17:26.527703    4522 certs.go:68] Setting up /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/running-upgrade-724000 for IP: 10.0.2.15
	I0722 04:17:26.527710    4522 certs.go:194] generating shared ca certs ...
	I0722 04:17:26.527718    4522 certs.go:226] acquiring lock for ca certs: {Name:mk3f2c80d56e217629ae5cc59f1253ebc769d305 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 04:17:26.527874    4522 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19313-1127/.minikube/ca.key
	I0722 04:17:26.527928    4522 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19313-1127/.minikube/proxy-client-ca.key
	I0722 04:17:26.527932    4522 certs.go:256] generating profile certs ...
	I0722 04:17:26.528017    4522 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/running-upgrade-724000/client.key
	I0722 04:17:26.528032    4522 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/running-upgrade-724000/apiserver.key.22db1849
	I0722 04:17:26.528046    4522 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/running-upgrade-724000/apiserver.crt.22db1849 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0722 04:17:26.781589    4522 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/running-upgrade-724000/apiserver.crt.22db1849 ...
	I0722 04:17:26.781608    4522 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/running-upgrade-724000/apiserver.crt.22db1849: {Name:mk9e615252d177a9b3d5e2e45f0a9764aed2135e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 04:17:26.785685    4522 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/running-upgrade-724000/apiserver.key.22db1849 ...
	I0722 04:17:26.785692    4522 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/running-upgrade-724000/apiserver.key.22db1849: {Name:mkb171389c6c2590cc4b0ee2f6eec39dc656ecc8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 04:17:26.785878    4522 certs.go:381] copying /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/running-upgrade-724000/apiserver.crt.22db1849 -> /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/running-upgrade-724000/apiserver.crt
	I0722 04:17:26.788285    4522 certs.go:385] copying /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/running-upgrade-724000/apiserver.key.22db1849 -> /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/running-upgrade-724000/apiserver.key
	I0722 04:17:26.788462    4522 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/running-upgrade-724000/proxy-client.key
	I0722 04:17:26.788633    4522 certs.go:484] found cert: /Users/jenkins/minikube-integration/19313-1127/.minikube/certs/1618.pem (1338 bytes)
	W0722 04:17:26.788667    4522 certs.go:480] ignoring /Users/jenkins/minikube-integration/19313-1127/.minikube/certs/1618_empty.pem, impossibly tiny 0 bytes
	I0722 04:17:26.788675    4522 certs.go:484] found cert: /Users/jenkins/minikube-integration/19313-1127/.minikube/certs/ca-key.pem (1679 bytes)
	I0722 04:17:26.788705    4522 certs.go:484] found cert: /Users/jenkins/minikube-integration/19313-1127/.minikube/certs/ca.pem (1078 bytes)
	I0722 04:17:26.788733    4522 certs.go:484] found cert: /Users/jenkins/minikube-integration/19313-1127/.minikube/certs/cert.pem (1123 bytes)
	I0722 04:17:26.788760    4522 certs.go:484] found cert: /Users/jenkins/minikube-integration/19313-1127/.minikube/certs/key.pem (1675 bytes)
	I0722 04:17:26.788820    4522 certs.go:484] found cert: /Users/jenkins/minikube-integration/19313-1127/.minikube/files/etc/ssl/certs/16182.pem (1708 bytes)
	I0722 04:17:26.789202    4522 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1127/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0722 04:17:26.796886    4522 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1127/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0722 04:17:26.804365    4522 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1127/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0722 04:17:26.811987    4522 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1127/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0722 04:17:26.818778    4522 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/running-upgrade-724000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0722 04:17:26.825577    4522 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/running-upgrade-724000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0722 04:17:26.832635    4522 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/running-upgrade-724000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0722 04:17:26.840096    4522 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/running-upgrade-724000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0722 04:17:26.846989    4522 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1127/.minikube/files/etc/ssl/certs/16182.pem --> /usr/share/ca-certificates/16182.pem (1708 bytes)
	I0722 04:17:26.853566    4522 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1127/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0722 04:17:26.860677    4522 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1127/.minikube/certs/1618.pem --> /usr/share/ca-certificates/1618.pem (1338 bytes)
	I0722 04:17:26.867572    4522 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0722 04:17:26.872487    4522 ssh_runner.go:195] Run: openssl version
	I0722 04:17:26.874298    4522 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16182.pem && ln -fs /usr/share/ca-certificates/16182.pem /etc/ssl/certs/16182.pem"
	I0722 04:17:26.877549    4522 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16182.pem
	I0722 04:17:26.879054    4522 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 22 10:36 /usr/share/ca-certificates/16182.pem
	I0722 04:17:26.879072    4522 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16182.pem
	I0722 04:17:26.880655    4522 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16182.pem /etc/ssl/certs/3ec20f2e.0"
	I0722 04:17:26.883497    4522 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0722 04:17:26.886457    4522 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0722 04:17:26.887877    4522 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 22 10:29 /usr/share/ca-certificates/minikubeCA.pem
	I0722 04:17:26.887895    4522 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0722 04:17:26.889671    4522 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0722 04:17:26.892760    4522 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1618.pem && ln -fs /usr/share/ca-certificates/1618.pem /etc/ssl/certs/1618.pem"
	I0722 04:17:26.895922    4522 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1618.pem
	I0722 04:17:26.897331    4522 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 22 10:36 /usr/share/ca-certificates/1618.pem
	I0722 04:17:26.897352    4522 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1618.pem
	I0722 04:17:26.899034    4522 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1618.pem /etc/ssl/certs/51391683.0"
	I0722 04:17:26.901562    4522 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0722 04:17:26.903324    4522 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0722 04:17:26.905003    4522 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0722 04:17:26.906730    4522 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0722 04:17:26.908452    4522 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0722 04:17:26.910396    4522 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0722 04:17:26.912233    4522 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0722 04:17:26.914100    4522 kubeadm.go:392] StartCluster: {Name:running-upgrade-724000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50263 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-724000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0722 04:17:26.914169    4522 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0722 04:17:26.924760    4522 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0722 04:17:26.928328    4522 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0722 04:17:26.928333    4522 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0722 04:17:26.928353    4522 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0722 04:17:26.931516    4522 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0722 04:17:26.931740    4522 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-724000" does not appear in /Users/jenkins/minikube-integration/19313-1127/kubeconfig
	I0722 04:17:26.931794    4522 kubeconfig.go:62] /Users/jenkins/minikube-integration/19313-1127/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-724000" cluster setting kubeconfig missing "running-upgrade-724000" context setting]
	I0722 04:17:26.931917    4522 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19313-1127/kubeconfig: {Name:mkb5cae8b3f3a2ff5a3e393f1e4daf97762f1a5b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 04:17:26.932598    4522 kapi.go:59] client config for running-upgrade-724000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/running-upgrade-724000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/running-upgrade-724000/client.key", CAFile:"/Users/jenkins/minikube-integration/19313-1127/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x102577790), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0722 04:17:26.932915    4522 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0722 04:17:26.935616    4522 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "running-upgrade-724000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
	I0722 04:17:26.935621    4522 kubeadm.go:1160] stopping kube-system containers ...
	I0722 04:17:26.935658    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0722 04:17:26.948524    4522 docker.go:483] Stopping containers: [d2d617658892 5045415bfa4b 1bdf989f8c59 dcd4fca602a8 e9453e46cb8b 4938223a0ec8 ead6a09f5bd6 635d1618db93 7f835e78e280 486d5242cb54 b0bd19f3f712 19e4254a9a68 39ed527750ca 2b03104b6edb 31e229b2e880 fe8ece5f8fe4 9bb9d942e41c e09297d3d157]
	I0722 04:17:26.948596    4522 ssh_runner.go:195] Run: docker stop d2d617658892 5045415bfa4b 1bdf989f8c59 dcd4fca602a8 e9453e46cb8b 4938223a0ec8 ead6a09f5bd6 635d1618db93 7f835e78e280 486d5242cb54 b0bd19f3f712 19e4254a9a68 39ed527750ca 2b03104b6edb 31e229b2e880 fe8ece5f8fe4 9bb9d942e41c e09297d3d157
	I0722 04:17:27.384611    4522 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0722 04:17:27.489881    4522 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 04:17:27.493191    4522 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5643 Jul 22 11:16 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5649 Jul 22 11:16 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Jul 22 11:17 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5597 Jul 22 11:16 /etc/kubernetes/scheduler.conf
	
	I0722 04:17:27.493217    4522 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50263 /etc/kubernetes/admin.conf
	I0722 04:17:27.496396    4522 kubeadm.go:163] "https://control-plane.minikube.internal:50263" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50263 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0722 04:17:27.496427    4522 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 04:17:27.499634    4522 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50263 /etc/kubernetes/kubelet.conf
	I0722 04:17:27.502664    4522 kubeadm.go:163] "https://control-plane.minikube.internal:50263" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50263 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0722 04:17:27.502688    4522 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 04:17:27.505628    4522 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50263 /etc/kubernetes/controller-manager.conf
	I0722 04:17:27.508532    4522 kubeadm.go:163] "https://control-plane.minikube.internal:50263" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50263 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0722 04:17:27.508559    4522 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 04:17:27.511712    4522 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50263 /etc/kubernetes/scheduler.conf
	I0722 04:17:27.514954    4522 kubeadm.go:163] "https://control-plane.minikube.internal:50263" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50263 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0722 04:17:27.514979    4522 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0722 04:17:27.518110    4522 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0722 04:17:27.520802    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 04:17:27.555791    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 04:17:28.214372    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0722 04:17:28.414629    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 04:17:28.435888    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0722 04:17:28.458604    4522 api_server.go:52] waiting for apiserver process to appear ...
	I0722 04:17:28.458684    4522 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 04:17:28.961084    4522 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 04:17:29.460762    4522 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 04:17:29.465392    4522 api_server.go:72] duration metric: took 1.006802166s to wait for apiserver process to appear ...
	I0722 04:17:29.465401    4522 api_server.go:88] waiting for apiserver healthz status ...
	I0722 04:17:29.465409    4522 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:17:34.467492    4522 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:17:34.467575    4522 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:17:39.468196    4522 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:17:39.468239    4522 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:17:44.468749    4522 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:17:44.468844    4522 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:17:49.469999    4522 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:17:49.470191    4522 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:17:54.471642    4522 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:17:54.471728    4522 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:17:59.473648    4522 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:17:59.473731    4522 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:18:04.476171    4522 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:18:04.476268    4522 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:18:09.478374    4522 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:18:09.478513    4522 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:18:14.481116    4522 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:18:14.481164    4522 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:18:19.483569    4522 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:18:19.483645    4522 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:18:24.485448    4522 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:18:24.485529    4522 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:18:29.488089    4522 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:18:29.488575    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0722 04:18:29.529236    4522 logs.go:276] 2 containers: [dffc81da16cb 5045415bfa4b]
	I0722 04:18:29.529374    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0722 04:18:29.550635    4522 logs.go:276] 2 containers: [8f8f38b73c9c 31e229b2e880]
	I0722 04:18:29.550738    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0722 04:18:29.565762    4522 logs.go:276] 1 containers: [35e09cb53f8d]
	I0722 04:18:29.565843    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0722 04:18:29.578056    4522 logs.go:276] 2 containers: [bb2de59a46b2 d2d617658892]
	I0722 04:18:29.578125    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0722 04:18:29.589477    4522 logs.go:276] 1 containers: [92576e20db6b]
	I0722 04:18:29.589539    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0722 04:18:29.600069    4522 logs.go:276] 2 containers: [d407493c2b8e 1bdf989f8c59]
	I0722 04:18:29.600159    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0722 04:18:29.609761    4522 logs.go:276] 0 containers: []
	W0722 04:18:29.609772    4522 logs.go:278] No container was found matching "kindnet"
	I0722 04:18:29.609834    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0722 04:18:29.620007    4522 logs.go:276] 2 containers: [404815c2fffd b0f51bb80a22]
	I0722 04:18:29.620028    4522 logs.go:123] Gathering logs for describe nodes ...
	I0722 04:18:29.620033    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0722 04:18:29.691450    4522 logs.go:123] Gathering logs for kube-apiserver [dffc81da16cb] ...
	I0722 04:18:29.691464    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dffc81da16cb"
	I0722 04:18:29.705607    4522 logs.go:123] Gathering logs for etcd [8f8f38b73c9c] ...
	I0722 04:18:29.705621    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f8f38b73c9c"
	I0722 04:18:29.719137    4522 logs.go:123] Gathering logs for coredns [35e09cb53f8d] ...
	I0722 04:18:29.719149    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35e09cb53f8d"
	I0722 04:18:29.730791    4522 logs.go:123] Gathering logs for dmesg ...
	I0722 04:18:29.730803    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 04:18:29.735016    4522 logs.go:123] Gathering logs for kube-proxy [92576e20db6b] ...
	I0722 04:18:29.735025    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92576e20db6b"
	I0722 04:18:29.746614    4522 logs.go:123] Gathering logs for etcd [31e229b2e880] ...
	I0722 04:18:29.746626    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31e229b2e880"
	I0722 04:18:29.761338    4522 logs.go:123] Gathering logs for kube-scheduler [d2d617658892] ...
	I0722 04:18:29.761349    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2d617658892"
	I0722 04:18:29.772608    4522 logs.go:123] Gathering logs for kube-controller-manager [d407493c2b8e] ...
	I0722 04:18:29.772619    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d407493c2b8e"
	I0722 04:18:29.789826    4522 logs.go:123] Gathering logs for kube-controller-manager [1bdf989f8c59] ...
	I0722 04:18:29.789837    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1bdf989f8c59"
	I0722 04:18:29.802124    4522 logs.go:123] Gathering logs for storage-provisioner [404815c2fffd] ...
	I0722 04:18:29.802138    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 404815c2fffd"
	I0722 04:18:29.813412    4522 logs.go:123] Gathering logs for storage-provisioner [b0f51bb80a22] ...
	I0722 04:18:29.813423    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0f51bb80a22"
	I0722 04:18:29.825048    4522 logs.go:123] Gathering logs for container status ...
	I0722 04:18:29.825061    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 04:18:29.837304    4522 logs.go:123] Gathering logs for kube-apiserver [5045415bfa4b] ...
	I0722 04:18:29.837315    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5045415bfa4b"
	I0722 04:18:29.849951    4522 logs.go:123] Gathering logs for kube-scheduler [bb2de59a46b2] ...
	I0722 04:18:29.849962    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb2de59a46b2"
	I0722 04:18:29.862460    4522 logs.go:123] Gathering logs for Docker ...
	I0722 04:18:29.862470    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0722 04:18:29.888880    4522 logs.go:123] Gathering logs for kubelet ...
	I0722 04:18:29.888890    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0722 04:18:29.924561    4522 logs.go:138] Found kubelet problem: Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: W0722 11:17:46.135858    4280 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:18:29.924656    4522 logs.go:138] Found kubelet problem: Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: E0722 11:17:46.135900    4280 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	I0722 04:18:29.925376    4522 out.go:304] Setting ErrFile to fd 2...
	I0722 04:18:29.925383    4522 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0722 04:18:29.925409    4522 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0722 04:18:29.925412    4522 out.go:239]   Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: W0722 11:17:46.135858    4280 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	  Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: W0722 11:17:46.135858    4280 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:18:29.925417    4522 out.go:239]   Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: E0722 11:17:46.135900    4280 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	  Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: E0722 11:17:46.135900    4280 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	I0722 04:18:29.925420    4522 out.go:304] Setting ErrFile to fd 2...
	I0722 04:18:29.925422    4522 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:18:39.929569    4522 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:18:44.932418    4522 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:18:44.932869    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0722 04:18:44.974889    4522 logs.go:276] 2 containers: [dffc81da16cb 5045415bfa4b]
	I0722 04:18:44.975021    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0722 04:18:44.996491    4522 logs.go:276] 2 containers: [8f8f38b73c9c 31e229b2e880]
	I0722 04:18:44.996610    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0722 04:18:45.012480    4522 logs.go:276] 1 containers: [35e09cb53f8d]
	I0722 04:18:45.012551    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0722 04:18:45.025591    4522 logs.go:276] 2 containers: [bb2de59a46b2 d2d617658892]
	I0722 04:18:45.025665    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0722 04:18:45.036817    4522 logs.go:276] 1 containers: [92576e20db6b]
	I0722 04:18:45.036886    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0722 04:18:45.047156    4522 logs.go:276] 2 containers: [d407493c2b8e 1bdf989f8c59]
	I0722 04:18:45.047233    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0722 04:18:45.062644    4522 logs.go:276] 0 containers: []
	W0722 04:18:45.062655    4522 logs.go:278] No container was found matching "kindnet"
	I0722 04:18:45.062723    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0722 04:18:45.074208    4522 logs.go:276] 2 containers: [404815c2fffd b0f51bb80a22]
	I0722 04:18:45.074226    4522 logs.go:123] Gathering logs for kube-proxy [92576e20db6b] ...
	I0722 04:18:45.074232    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92576e20db6b"
	I0722 04:18:45.086380    4522 logs.go:123] Gathering logs for kube-controller-manager [1bdf989f8c59] ...
	I0722 04:18:45.086393    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1bdf989f8c59"
	I0722 04:18:45.098121    4522 logs.go:123] Gathering logs for storage-provisioner [404815c2fffd] ...
	I0722 04:18:45.098135    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 404815c2fffd"
	I0722 04:18:45.109711    4522 logs.go:123] Gathering logs for Docker ...
	I0722 04:18:45.109722    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0722 04:18:45.135660    4522 logs.go:123] Gathering logs for kube-apiserver [dffc81da16cb] ...
	I0722 04:18:45.135670    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dffc81da16cb"
	I0722 04:18:45.149658    4522 logs.go:123] Gathering logs for coredns [35e09cb53f8d] ...
	I0722 04:18:45.149674    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35e09cb53f8d"
	I0722 04:18:45.161058    4522 logs.go:123] Gathering logs for kube-scheduler [d2d617658892] ...
	I0722 04:18:45.161072    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2d617658892"
	I0722 04:18:45.172092    4522 logs.go:123] Gathering logs for dmesg ...
	I0722 04:18:45.172103    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 04:18:45.176935    4522 logs.go:123] Gathering logs for describe nodes ...
	I0722 04:18:45.176943    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0722 04:18:45.211587    4522 logs.go:123] Gathering logs for kube-apiserver [5045415bfa4b] ...
	I0722 04:18:45.211599    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5045415bfa4b"
	I0722 04:18:45.223784    4522 logs.go:123] Gathering logs for kube-scheduler [bb2de59a46b2] ...
	I0722 04:18:45.223797    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb2de59a46b2"
	I0722 04:18:45.235625    4522 logs.go:123] Gathering logs for storage-provisioner [b0f51bb80a22] ...
	I0722 04:18:45.235634    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0f51bb80a22"
	I0722 04:18:45.246564    4522 logs.go:123] Gathering logs for kube-controller-manager [d407493c2b8e] ...
	I0722 04:18:45.246575    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d407493c2b8e"
	I0722 04:18:45.264646    4522 logs.go:123] Gathering logs for container status ...
	I0722 04:18:45.264659    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 04:18:45.280902    4522 logs.go:123] Gathering logs for kubelet ...
	I0722 04:18:45.280911    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0722 04:18:45.315574    4522 logs.go:138] Found kubelet problem: Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: W0722 11:17:46.135858    4280 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:18:45.315672    4522 logs.go:138] Found kubelet problem: Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: E0722 11:17:46.135900    4280 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	I0722 04:18:45.316425    4522 logs.go:123] Gathering logs for etcd [8f8f38b73c9c] ...
	I0722 04:18:45.316432    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f8f38b73c9c"
	I0722 04:18:45.330466    4522 logs.go:123] Gathering logs for etcd [31e229b2e880] ...
	I0722 04:18:45.330476    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31e229b2e880"
	I0722 04:18:45.348705    4522 out.go:304] Setting ErrFile to fd 2...
	I0722 04:18:45.348715    4522 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0722 04:18:45.348738    4522 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0722 04:18:45.348748    4522 out.go:239]   Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: W0722 11:17:46.135858    4280 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	  Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: W0722 11:17:46.135858    4280 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:18:45.348752    4522 out.go:239]   Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: E0722 11:17:46.135900    4280 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	  Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: E0722 11:17:46.135900    4280 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	I0722 04:18:45.348758    4522 out.go:304] Setting ErrFile to fd 2...
	I0722 04:18:45.348765    4522 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:18:55.352949    4522 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:19:00.355514    4522 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:19:00.356009    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0722 04:19:00.395149    4522 logs.go:276] 2 containers: [dffc81da16cb 5045415bfa4b]
	I0722 04:19:00.395293    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0722 04:19:00.418664    4522 logs.go:276] 2 containers: [8f8f38b73c9c 31e229b2e880]
	I0722 04:19:00.418796    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0722 04:19:00.433565    4522 logs.go:276] 1 containers: [35e09cb53f8d]
	I0722 04:19:00.433640    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0722 04:19:00.445907    4522 logs.go:276] 2 containers: [bb2de59a46b2 d2d617658892]
	I0722 04:19:00.445978    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0722 04:19:00.457696    4522 logs.go:276] 1 containers: [92576e20db6b]
	I0722 04:19:00.457767    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0722 04:19:00.468728    4522 logs.go:276] 2 containers: [d407493c2b8e 1bdf989f8c59]
	I0722 04:19:00.468802    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0722 04:19:00.479694    4522 logs.go:276] 0 containers: []
	W0722 04:19:00.479707    4522 logs.go:278] No container was found matching "kindnet"
	I0722 04:19:00.479763    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0722 04:19:00.489901    4522 logs.go:276] 2 containers: [404815c2fffd b0f51bb80a22]
	I0722 04:19:00.489917    4522 logs.go:123] Gathering logs for dmesg ...
	I0722 04:19:00.489923    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 04:19:00.494338    4522 logs.go:123] Gathering logs for describe nodes ...
	I0722 04:19:00.494346    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0722 04:19:00.532070    4522 logs.go:123] Gathering logs for kube-apiserver [dffc81da16cb] ...
	I0722 04:19:00.532086    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dffc81da16cb"
	I0722 04:19:00.548003    4522 logs.go:123] Gathering logs for etcd [8f8f38b73c9c] ...
	I0722 04:19:00.548014    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f8f38b73c9c"
	I0722 04:19:00.563787    4522 logs.go:123] Gathering logs for kube-apiserver [5045415bfa4b] ...
	I0722 04:19:00.563797    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5045415bfa4b"
	I0722 04:19:00.575826    4522 logs.go:123] Gathering logs for storage-provisioner [404815c2fffd] ...
	I0722 04:19:00.575837    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 404815c2fffd"
	I0722 04:19:00.587187    4522 logs.go:123] Gathering logs for kubelet ...
	I0722 04:19:00.587197    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0722 04:19:00.621588    4522 logs.go:138] Found kubelet problem: Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: W0722 11:17:46.135858    4280 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:19:00.621681    4522 logs.go:138] Found kubelet problem: Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: E0722 11:17:46.135900    4280 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	I0722 04:19:00.622376    4522 logs.go:123] Gathering logs for etcd [31e229b2e880] ...
	I0722 04:19:00.622380    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31e229b2e880"
	I0722 04:19:00.636844    4522 logs.go:123] Gathering logs for kube-proxy [92576e20db6b] ...
	I0722 04:19:00.636855    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92576e20db6b"
	I0722 04:19:00.648682    4522 logs.go:123] Gathering logs for kube-controller-manager [d407493c2b8e] ...
	I0722 04:19:00.648693    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d407493c2b8e"
	I0722 04:19:00.665933    4522 logs.go:123] Gathering logs for kube-controller-manager [1bdf989f8c59] ...
	I0722 04:19:00.665944    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1bdf989f8c59"
	I0722 04:19:00.677183    4522 logs.go:123] Gathering logs for Docker ...
	I0722 04:19:00.677193    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0722 04:19:00.703601    4522 logs.go:123] Gathering logs for coredns [35e09cb53f8d] ...
	I0722 04:19:00.703619    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35e09cb53f8d"
	I0722 04:19:00.714836    4522 logs.go:123] Gathering logs for kube-scheduler [bb2de59a46b2] ...
	I0722 04:19:00.714849    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb2de59a46b2"
	I0722 04:19:00.725896    4522 logs.go:123] Gathering logs for kube-scheduler [d2d617658892] ...
	I0722 04:19:00.725908    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2d617658892"
	I0722 04:19:00.738099    4522 logs.go:123] Gathering logs for storage-provisioner [b0f51bb80a22] ...
	I0722 04:19:00.738110    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0f51bb80a22"
	I0722 04:19:00.749403    4522 logs.go:123] Gathering logs for container status ...
	I0722 04:19:00.749417    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 04:19:00.760996    4522 out.go:304] Setting ErrFile to fd 2...
	I0722 04:19:00.761007    4522 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0722 04:19:00.761034    4522 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0722 04:19:00.761039    4522 out.go:239]   Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: W0722 11:17:46.135858    4280 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	  Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: W0722 11:17:46.135858    4280 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:19:00.761042    4522 out.go:239]   Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: E0722 11:17:46.135900    4280 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	  Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: E0722 11:17:46.135900    4280 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	I0722 04:19:00.761047    4522 out.go:304] Setting ErrFile to fd 2...
	I0722 04:19:00.761050    4522 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:19:10.760207    4522 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:19:15.759904    4522 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:19:15.760100    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0722 04:19:15.778631    4522 logs.go:276] 2 containers: [dffc81da16cb 5045415bfa4b]
	I0722 04:19:15.778714    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0722 04:19:15.792805    4522 logs.go:276] 2 containers: [8f8f38b73c9c 31e229b2e880]
	I0722 04:19:15.792874    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0722 04:19:15.804114    4522 logs.go:276] 1 containers: [35e09cb53f8d]
	I0722 04:19:15.804183    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0722 04:19:15.816202    4522 logs.go:276] 2 containers: [bb2de59a46b2 d2d617658892]
	I0722 04:19:15.816271    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0722 04:19:15.827941    4522 logs.go:276] 1 containers: [92576e20db6b]
	I0722 04:19:15.827998    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0722 04:19:15.839849    4522 logs.go:276] 2 containers: [d407493c2b8e 1bdf989f8c59]
	I0722 04:19:15.839897    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0722 04:19:15.851354    4522 logs.go:276] 0 containers: []
	W0722 04:19:15.851367    4522 logs.go:278] No container was found matching "kindnet"
	I0722 04:19:15.851421    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0722 04:19:15.861749    4522 logs.go:276] 2 containers: [404815c2fffd b0f51bb80a22]
	I0722 04:19:15.861770    4522 logs.go:123] Gathering logs for kube-proxy [92576e20db6b] ...
	I0722 04:19:15.861776    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92576e20db6b"
	I0722 04:19:15.873944    4522 logs.go:123] Gathering logs for storage-provisioner [404815c2fffd] ...
	I0722 04:19:15.873954    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 404815c2fffd"
	I0722 04:19:15.885294    4522 logs.go:123] Gathering logs for describe nodes ...
	I0722 04:19:15.885310    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0722 04:19:15.920980    4522 logs.go:123] Gathering logs for kube-apiserver [5045415bfa4b] ...
	I0722 04:19:15.920991    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5045415bfa4b"
	I0722 04:19:15.933045    4522 logs.go:123] Gathering logs for etcd [31e229b2e880] ...
	I0722 04:19:15.933056    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31e229b2e880"
	I0722 04:19:15.947204    4522 logs.go:123] Gathering logs for coredns [35e09cb53f8d] ...
	I0722 04:19:15.947213    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35e09cb53f8d"
	I0722 04:19:15.958115    4522 logs.go:123] Gathering logs for kube-scheduler [bb2de59a46b2] ...
	I0722 04:19:15.958130    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb2de59a46b2"
	I0722 04:19:15.969622    4522 logs.go:123] Gathering logs for kubelet ...
	I0722 04:19:15.969632    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0722 04:19:16.004004    4522 logs.go:138] Found kubelet problem: Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: W0722 11:17:46.135858    4280 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:19:16.004108    4522 logs.go:138] Found kubelet problem: Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: E0722 11:17:46.135900    4280 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	I0722 04:19:16.004851    4522 logs.go:123] Gathering logs for kube-scheduler [d2d617658892] ...
	I0722 04:19:16.004855    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2d617658892"
	I0722 04:19:16.015643    4522 logs.go:123] Gathering logs for container status ...
	I0722 04:19:16.015656    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 04:19:16.027846    4522 logs.go:123] Gathering logs for kube-apiserver [dffc81da16cb] ...
	I0722 04:19:16.027858    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dffc81da16cb"
	I0722 04:19:16.041710    4522 logs.go:123] Gathering logs for etcd [8f8f38b73c9c] ...
	I0722 04:19:16.041720    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f8f38b73c9c"
	I0722 04:19:16.055158    4522 logs.go:123] Gathering logs for storage-provisioner [b0f51bb80a22] ...
	I0722 04:19:16.055169    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0f51bb80a22"
	I0722 04:19:16.067244    4522 logs.go:123] Gathering logs for dmesg ...
	I0722 04:19:16.067257    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 04:19:16.072369    4522 logs.go:123] Gathering logs for kube-controller-manager [d407493c2b8e] ...
	I0722 04:19:16.072380    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d407493c2b8e"
	I0722 04:19:16.090013    4522 logs.go:123] Gathering logs for kube-controller-manager [1bdf989f8c59] ...
	I0722 04:19:16.090023    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1bdf989f8c59"
	I0722 04:19:16.101119    4522 logs.go:123] Gathering logs for Docker ...
	I0722 04:19:16.101129    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0722 04:19:16.127368    4522 out.go:304] Setting ErrFile to fd 2...
	I0722 04:19:16.127382    4522 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0722 04:19:16.127408    4522 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0722 04:19:16.127413    4522 out.go:239]   Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: W0722 11:17:46.135858    4280 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	  Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: W0722 11:17:46.135858    4280 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:19:16.127417    4522 out.go:239]   Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: E0722 11:17:46.135900    4280 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	  Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: E0722 11:17:46.135900    4280 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	I0722 04:19:16.127421    4522 out.go:304] Setting ErrFile to fd 2...
	I0722 04:19:16.127424    4522 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:19:26.127897    4522 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:19:31.129592    4522 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:19:31.130040    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0722 04:19:31.170538    4522 logs.go:276] 2 containers: [dffc81da16cb 5045415bfa4b]
	I0722 04:19:31.170671    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0722 04:19:31.192665    4522 logs.go:276] 2 containers: [8f8f38b73c9c 31e229b2e880]
	I0722 04:19:31.192772    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0722 04:19:31.207328    4522 logs.go:276] 1 containers: [35e09cb53f8d]
	I0722 04:19:31.207402    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0722 04:19:31.220125    4522 logs.go:276] 2 containers: [bb2de59a46b2 d2d617658892]
	I0722 04:19:31.220199    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0722 04:19:31.230519    4522 logs.go:276] 1 containers: [92576e20db6b]
	I0722 04:19:31.230576    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0722 04:19:31.244980    4522 logs.go:276] 2 containers: [d407493c2b8e 1bdf989f8c59]
	I0722 04:19:31.245047    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0722 04:19:31.254854    4522 logs.go:276] 0 containers: []
	W0722 04:19:31.254866    4522 logs.go:278] No container was found matching "kindnet"
	I0722 04:19:31.254916    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0722 04:19:31.265679    4522 logs.go:276] 2 containers: [404815c2fffd b0f51bb80a22]
	I0722 04:19:31.265696    4522 logs.go:123] Gathering logs for describe nodes ...
	I0722 04:19:31.265701    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0722 04:19:31.302463    4522 logs.go:123] Gathering logs for etcd [8f8f38b73c9c] ...
	I0722 04:19:31.302478    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f8f38b73c9c"
	I0722 04:19:31.316430    4522 logs.go:123] Gathering logs for kube-scheduler [bb2de59a46b2] ...
	I0722 04:19:31.316442    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb2de59a46b2"
	I0722 04:19:31.328054    4522 logs.go:123] Gathering logs for storage-provisioner [404815c2fffd] ...
	I0722 04:19:31.328066    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 404815c2fffd"
	I0722 04:19:31.339116    4522 logs.go:123] Gathering logs for kubelet ...
	I0722 04:19:31.339129    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0722 04:19:31.375438    4522 logs.go:138] Found kubelet problem: Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: W0722 11:17:46.135858    4280 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:19:31.375532    4522 logs.go:138] Found kubelet problem: Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: E0722 11:17:46.135900    4280 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	I0722 04:19:31.376225    4522 logs.go:123] Gathering logs for kube-apiserver [dffc81da16cb] ...
	I0722 04:19:31.376230    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dffc81da16cb"
	I0722 04:19:31.389915    4522 logs.go:123] Gathering logs for kube-controller-manager [1bdf989f8c59] ...
	I0722 04:19:31.389927    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1bdf989f8c59"
	I0722 04:19:31.401478    4522 logs.go:123] Gathering logs for dmesg ...
	I0722 04:19:31.401490    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 04:19:31.406354    4522 logs.go:123] Gathering logs for coredns [35e09cb53f8d] ...
	I0722 04:19:31.406364    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35e09cb53f8d"
	I0722 04:19:31.417761    4522 logs.go:123] Gathering logs for kube-scheduler [d2d617658892] ...
	I0722 04:19:31.417775    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2d617658892"
	I0722 04:19:31.429239    4522 logs.go:123] Gathering logs for storage-provisioner [b0f51bb80a22] ...
	I0722 04:19:31.429253    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0f51bb80a22"
	I0722 04:19:31.440214    4522 logs.go:123] Gathering logs for Docker ...
	I0722 04:19:31.440228    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0722 04:19:31.463988    4522 logs.go:123] Gathering logs for etcd [31e229b2e880] ...
	I0722 04:19:31.463997    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31e229b2e880"
	I0722 04:19:31.478101    4522 logs.go:123] Gathering logs for kube-proxy [92576e20db6b] ...
	I0722 04:19:31.478113    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92576e20db6b"
	I0722 04:19:31.489645    4522 logs.go:123] Gathering logs for kube-controller-manager [d407493c2b8e] ...
	I0722 04:19:31.489657    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d407493c2b8e"
	I0722 04:19:31.506934    4522 logs.go:123] Gathering logs for container status ...
	I0722 04:19:31.506946    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 04:19:31.518736    4522 logs.go:123] Gathering logs for kube-apiserver [5045415bfa4b] ...
	I0722 04:19:31.518750    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5045415bfa4b"
	I0722 04:19:31.531116    4522 out.go:304] Setting ErrFile to fd 2...
	I0722 04:19:31.531129    4522 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0722 04:19:31.531159    4522 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0722 04:19:31.531163    4522 out.go:239]   Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: W0722 11:17:46.135858    4280 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	  Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: W0722 11:17:46.135858    4280 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:19:31.531167    4522 out.go:239]   Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: E0722 11:17:46.135900    4280 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	  Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: E0722 11:17:46.135900    4280 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	I0722 04:19:31.531171    4522 out.go:304] Setting ErrFile to fd 2...
	I0722 04:19:31.531174    4522 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:19:41.533862    4522 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:19:46.535719    4522 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:19:46.535987    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0722 04:19:46.564937    4522 logs.go:276] 2 containers: [dffc81da16cb 5045415bfa4b]
	I0722 04:19:46.565044    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0722 04:19:46.580784    4522 logs.go:276] 2 containers: [8f8f38b73c9c 31e229b2e880]
	I0722 04:19:46.580861    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0722 04:19:46.593544    4522 logs.go:276] 1 containers: [35e09cb53f8d]
	I0722 04:19:46.593614    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0722 04:19:46.608835    4522 logs.go:276] 2 containers: [bb2de59a46b2 d2d617658892]
	I0722 04:19:46.608895    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0722 04:19:46.619168    4522 logs.go:276] 1 containers: [92576e20db6b]
	I0722 04:19:46.619237    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0722 04:19:46.629823    4522 logs.go:276] 2 containers: [d407493c2b8e 1bdf989f8c59]
	I0722 04:19:46.629887    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0722 04:19:46.641328    4522 logs.go:276] 0 containers: []
	W0722 04:19:46.641339    4522 logs.go:278] No container was found matching "kindnet"
	I0722 04:19:46.641386    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0722 04:19:46.651933    4522 logs.go:276] 2 containers: [404815c2fffd b0f51bb80a22]
	I0722 04:19:46.651955    4522 logs.go:123] Gathering logs for coredns [35e09cb53f8d] ...
	I0722 04:19:46.651961    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35e09cb53f8d"
	I0722 04:19:46.666422    4522 logs.go:123] Gathering logs for kubelet ...
	I0722 04:19:46.666434    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0722 04:19:46.700809    4522 logs.go:138] Found kubelet problem: Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: W0722 11:17:46.135858    4280 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:19:46.700902    4522 logs.go:138] Found kubelet problem: Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: E0722 11:17:46.135900    4280 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	I0722 04:19:46.701599    4522 logs.go:123] Gathering logs for dmesg ...
	I0722 04:19:46.701603    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 04:19:46.705704    4522 logs.go:123] Gathering logs for kube-proxy [92576e20db6b] ...
	I0722 04:19:46.705711    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92576e20db6b"
	I0722 04:19:46.717038    4522 logs.go:123] Gathering logs for describe nodes ...
	I0722 04:19:46.717049    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0722 04:19:46.752247    4522 logs.go:123] Gathering logs for etcd [31e229b2e880] ...
	I0722 04:19:46.752264    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31e229b2e880"
	I0722 04:19:46.768168    4522 logs.go:123] Gathering logs for kube-controller-manager [1bdf989f8c59] ...
	I0722 04:19:46.768179    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1bdf989f8c59"
	I0722 04:19:46.784258    4522 logs.go:123] Gathering logs for storage-provisioner [b0f51bb80a22] ...
	I0722 04:19:46.784270    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0f51bb80a22"
	I0722 04:19:46.795558    4522 logs.go:123] Gathering logs for Docker ...
	I0722 04:19:46.795572    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0722 04:19:46.819294    4522 logs.go:123] Gathering logs for kube-apiserver [dffc81da16cb] ...
	I0722 04:19:46.819301    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dffc81da16cb"
	I0722 04:19:46.832943    4522 logs.go:123] Gathering logs for kube-scheduler [d2d617658892] ...
	I0722 04:19:46.832955    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2d617658892"
	I0722 04:19:46.844181    4522 logs.go:123] Gathering logs for kube-scheduler [bb2de59a46b2] ...
	I0722 04:19:46.844193    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb2de59a46b2"
	I0722 04:19:46.855586    4522 logs.go:123] Gathering logs for kube-controller-manager [d407493c2b8e] ...
	I0722 04:19:46.855596    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d407493c2b8e"
	I0722 04:19:46.875065    4522 logs.go:123] Gathering logs for storage-provisioner [404815c2fffd] ...
	I0722 04:19:46.875076    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 404815c2fffd"
	I0722 04:19:46.886189    4522 logs.go:123] Gathering logs for container status ...
	I0722 04:19:46.886199    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 04:19:46.898836    4522 logs.go:123] Gathering logs for kube-apiserver [5045415bfa4b] ...
	I0722 04:19:46.898846    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5045415bfa4b"
	I0722 04:19:46.912189    4522 logs.go:123] Gathering logs for etcd [8f8f38b73c9c] ...
	I0722 04:19:46.912201    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f8f38b73c9c"
	I0722 04:19:46.926536    4522 out.go:304] Setting ErrFile to fd 2...
	I0722 04:19:46.926547    4522 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0722 04:19:46.926571    4522 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0722 04:19:46.926575    4522 out.go:239]   Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: W0722 11:17:46.135858    4280 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	  Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: W0722 11:17:46.135858    4280 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:19:46.926578    4522 out.go:239]   Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: E0722 11:17:46.135900    4280 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	  Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: E0722 11:17:46.135900    4280 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	I0722 04:19:46.926582    4522 out.go:304] Setting ErrFile to fd 2...
	I0722 04:19:46.926585    4522 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:19:56.928974    4522 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:20:01.931185    4522 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:20:01.931393    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0722 04:20:01.959321    4522 logs.go:276] 2 containers: [dffc81da16cb 5045415bfa4b]
	I0722 04:20:01.959436    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0722 04:20:01.974989    4522 logs.go:276] 2 containers: [8f8f38b73c9c 31e229b2e880]
	I0722 04:20:01.975067    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0722 04:20:01.988369    4522 logs.go:276] 1 containers: [35e09cb53f8d]
	I0722 04:20:01.988441    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0722 04:20:02.000979    4522 logs.go:276] 2 containers: [bb2de59a46b2 d2d617658892]
	I0722 04:20:02.001053    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0722 04:20:02.011708    4522 logs.go:276] 1 containers: [92576e20db6b]
	I0722 04:20:02.011773    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0722 04:20:02.023160    4522 logs.go:276] 2 containers: [d407493c2b8e 1bdf989f8c59]
	I0722 04:20:02.023232    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0722 04:20:02.033672    4522 logs.go:276] 0 containers: []
	W0722 04:20:02.033685    4522 logs.go:278] No container was found matching "kindnet"
	I0722 04:20:02.033751    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0722 04:20:02.043791    4522 logs.go:276] 2 containers: [404815c2fffd b0f51bb80a22]
	I0722 04:20:02.043809    4522 logs.go:123] Gathering logs for kube-apiserver [5045415bfa4b] ...
	I0722 04:20:02.043815    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5045415bfa4b"
	I0722 04:20:02.055883    4522 logs.go:123] Gathering logs for etcd [8f8f38b73c9c] ...
	I0722 04:20:02.055896    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f8f38b73c9c"
	I0722 04:20:02.070050    4522 logs.go:123] Gathering logs for kube-scheduler [d2d617658892] ...
	I0722 04:20:02.070061    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2d617658892"
	I0722 04:20:02.081067    4522 logs.go:123] Gathering logs for kube-controller-manager [1bdf989f8c59] ...
	I0722 04:20:02.081079    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1bdf989f8c59"
	I0722 04:20:02.092794    4522 logs.go:123] Gathering logs for storage-provisioner [b0f51bb80a22] ...
	I0722 04:20:02.092808    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0f51bb80a22"
	I0722 04:20:02.103955    4522 logs.go:123] Gathering logs for kube-apiserver [dffc81da16cb] ...
	I0722 04:20:02.103968    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dffc81da16cb"
	I0722 04:20:02.118990    4522 logs.go:123] Gathering logs for dmesg ...
	I0722 04:20:02.118999    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 04:20:02.123537    4522 logs.go:123] Gathering logs for kube-proxy [92576e20db6b] ...
	I0722 04:20:02.123545    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92576e20db6b"
	I0722 04:20:02.134903    4522 logs.go:123] Gathering logs for kube-controller-manager [d407493c2b8e] ...
	I0722 04:20:02.134913    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d407493c2b8e"
	I0722 04:20:02.153021    4522 logs.go:123] Gathering logs for Docker ...
	I0722 04:20:02.153031    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0722 04:20:02.177504    4522 logs.go:123] Gathering logs for kubelet ...
	I0722 04:20:02.177514    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0722 04:20:02.212678    4522 logs.go:138] Found kubelet problem: Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: W0722 11:17:46.135858    4280 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:20:02.212773    4522 logs.go:138] Found kubelet problem: Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: E0722 11:17:46.135900    4280 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	I0722 04:20:02.213472    4522 logs.go:123] Gathering logs for etcd [31e229b2e880] ...
	I0722 04:20:02.213477    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31e229b2e880"
	I0722 04:20:02.227852    4522 logs.go:123] Gathering logs for coredns [35e09cb53f8d] ...
	I0722 04:20:02.227865    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35e09cb53f8d"
	I0722 04:20:02.240769    4522 logs.go:123] Gathering logs for kube-scheduler [bb2de59a46b2] ...
	I0722 04:20:02.240783    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb2de59a46b2"
	I0722 04:20:02.252297    4522 logs.go:123] Gathering logs for storage-provisioner [404815c2fffd] ...
	I0722 04:20:02.252307    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 404815c2fffd"
	I0722 04:20:02.263888    4522 logs.go:123] Gathering logs for container status ...
	I0722 04:20:02.263901    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 04:20:02.276854    4522 logs.go:123] Gathering logs for describe nodes ...
	I0722 04:20:02.276872    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0722 04:20:02.314001    4522 out.go:304] Setting ErrFile to fd 2...
	I0722 04:20:02.314018    4522 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0722 04:20:02.314052    4522 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0722 04:20:02.314062    4522 out.go:239]   Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: W0722 11:17:46.135858    4280 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	  Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: W0722 11:17:46.135858    4280 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:20:02.314066    4522 out.go:239]   Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: E0722 11:17:46.135900    4280 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	  Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: E0722 11:17:46.135900    4280 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	I0722 04:20:02.314070    4522 out.go:304] Setting ErrFile to fd 2...
	I0722 04:20:02.314073    4522 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:20:12.318020    4522 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:20:17.320688    4522 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:20:17.320859    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0722 04:20:17.333687    4522 logs.go:276] 2 containers: [dffc81da16cb 5045415bfa4b]
	I0722 04:20:17.333759    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0722 04:20:17.344332    4522 logs.go:276] 2 containers: [8f8f38b73c9c 31e229b2e880]
	I0722 04:20:17.344399    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0722 04:20:17.358086    4522 logs.go:276] 1 containers: [35e09cb53f8d]
	I0722 04:20:17.358161    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0722 04:20:17.372076    4522 logs.go:276] 2 containers: [bb2de59a46b2 d2d617658892]
	I0722 04:20:17.372155    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0722 04:20:17.383192    4522 logs.go:276] 1 containers: [92576e20db6b]
	I0722 04:20:17.383264    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0722 04:20:17.401253    4522 logs.go:276] 2 containers: [d407493c2b8e 1bdf989f8c59]
	I0722 04:20:17.401322    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0722 04:20:17.416918    4522 logs.go:276] 0 containers: []
	W0722 04:20:17.416930    4522 logs.go:278] No container was found matching "kindnet"
	I0722 04:20:17.416983    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0722 04:20:17.428687    4522 logs.go:276] 2 containers: [404815c2fffd b0f51bb80a22]
	I0722 04:20:17.428708    4522 logs.go:123] Gathering logs for kube-scheduler [bb2de59a46b2] ...
	I0722 04:20:17.428714    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb2de59a46b2"
	I0722 04:20:17.445478    4522 logs.go:123] Gathering logs for kube-proxy [92576e20db6b] ...
	I0722 04:20:17.445489    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92576e20db6b"
	I0722 04:20:17.461285    4522 logs.go:123] Gathering logs for Docker ...
	I0722 04:20:17.461296    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0722 04:20:17.485614    4522 logs.go:123] Gathering logs for etcd [31e229b2e880] ...
	I0722 04:20:17.485628    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31e229b2e880"
	I0722 04:20:17.500948    4522 logs.go:123] Gathering logs for kube-apiserver [5045415bfa4b] ...
	I0722 04:20:17.500963    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5045415bfa4b"
	I0722 04:20:17.513422    4522 logs.go:123] Gathering logs for coredns [35e09cb53f8d] ...
	I0722 04:20:17.513435    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35e09cb53f8d"
	I0722 04:20:17.525566    4522 logs.go:123] Gathering logs for kube-scheduler [d2d617658892] ...
	I0722 04:20:17.525577    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2d617658892"
	I0722 04:20:17.544666    4522 logs.go:123] Gathering logs for kubelet ...
	I0722 04:20:17.544676    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0722 04:20:17.580388    4522 logs.go:138] Found kubelet problem: Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: W0722 11:17:46.135858    4280 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:20:17.580491    4522 logs.go:138] Found kubelet problem: Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: E0722 11:17:46.135900    4280 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	I0722 04:20:17.581233    4522 logs.go:123] Gathering logs for kube-controller-manager [1bdf989f8c59] ...
	I0722 04:20:17.581240    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1bdf989f8c59"
	I0722 04:20:17.594285    4522 logs.go:123] Gathering logs for storage-provisioner [404815c2fffd] ...
	I0722 04:20:17.594299    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 404815c2fffd"
	I0722 04:20:17.612039    4522 logs.go:123] Gathering logs for container status ...
	I0722 04:20:17.612066    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 04:20:17.625139    4522 logs.go:123] Gathering logs for dmesg ...
	I0722 04:20:17.625151    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 04:20:17.630071    4522 logs.go:123] Gathering logs for kube-apiserver [dffc81da16cb] ...
	I0722 04:20:17.630084    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dffc81da16cb"
	I0722 04:20:17.649109    4522 logs.go:123] Gathering logs for etcd [8f8f38b73c9c] ...
	I0722 04:20:17.649123    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f8f38b73c9c"
	I0722 04:20:17.664480    4522 logs.go:123] Gathering logs for kube-controller-manager [d407493c2b8e] ...
	I0722 04:20:17.664498    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d407493c2b8e"
	I0722 04:20:17.682974    4522 logs.go:123] Gathering logs for storage-provisioner [b0f51bb80a22] ...
	I0722 04:20:17.682989    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0f51bb80a22"
	I0722 04:20:17.696734    4522 logs.go:123] Gathering logs for describe nodes ...
	I0722 04:20:17.696747    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0722 04:20:17.737172    4522 out.go:304] Setting ErrFile to fd 2...
	I0722 04:20:17.737184    4522 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0722 04:20:17.737212    4522 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0722 04:20:17.737229    4522 out.go:239]   Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: W0722 11:17:46.135858    4280 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	  Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: W0722 11:17:46.135858    4280 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:20:17.737234    4522 out.go:239]   Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: E0722 11:17:46.135900    4280 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	  Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: E0722 11:17:46.135900    4280 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	I0722 04:20:17.737238    4522 out.go:304] Setting ErrFile to fd 2...
	I0722 04:20:17.737242    4522 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:20:27.741209    4522 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:20:32.743397    4522 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:20:32.743495    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0722 04:20:32.755756    4522 logs.go:276] 2 containers: [dffc81da16cb 5045415bfa4b]
	I0722 04:20:32.755828    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0722 04:20:32.768060    4522 logs.go:276] 2 containers: [8f8f38b73c9c 31e229b2e880]
	I0722 04:20:32.768131    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0722 04:20:32.784123    4522 logs.go:276] 1 containers: [35e09cb53f8d]
	I0722 04:20:32.784193    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0722 04:20:32.796431    4522 logs.go:276] 2 containers: [bb2de59a46b2 d2d617658892]
	I0722 04:20:32.796504    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0722 04:20:32.809051    4522 logs.go:276] 1 containers: [92576e20db6b]
	I0722 04:20:32.809121    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0722 04:20:32.821275    4522 logs.go:276] 2 containers: [d407493c2b8e 1bdf989f8c59]
	I0722 04:20:32.821359    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0722 04:20:32.833050    4522 logs.go:276] 0 containers: []
	W0722 04:20:32.833062    4522 logs.go:278] No container was found matching "kindnet"
	I0722 04:20:32.833122    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0722 04:20:32.847142    4522 logs.go:276] 2 containers: [404815c2fffd b0f51bb80a22]
	I0722 04:20:32.847166    4522 logs.go:123] Gathering logs for describe nodes ...
	I0722 04:20:32.847172    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0722 04:20:32.891361    4522 logs.go:123] Gathering logs for kube-controller-manager [d407493c2b8e] ...
	I0722 04:20:32.891378    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d407493c2b8e"
	I0722 04:20:32.913344    4522 logs.go:123] Gathering logs for storage-provisioner [404815c2fffd] ...
	I0722 04:20:32.913354    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 404815c2fffd"
	I0722 04:20:32.927040    4522 logs.go:123] Gathering logs for storage-provisioner [b0f51bb80a22] ...
	I0722 04:20:32.927050    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0f51bb80a22"
	I0722 04:20:32.939388    4522 logs.go:123] Gathering logs for container status ...
	I0722 04:20:32.939400    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 04:20:32.952157    4522 logs.go:123] Gathering logs for kubelet ...
	I0722 04:20:32.952169    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0722 04:20:32.987820    4522 logs.go:138] Found kubelet problem: Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: W0722 11:17:46.135858    4280 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:20:32.987914    4522 logs.go:138] Found kubelet problem: Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: E0722 11:17:46.135900    4280 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	I0722 04:20:32.988651    4522 logs.go:123] Gathering logs for etcd [8f8f38b73c9c] ...
	I0722 04:20:32.988656    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f8f38b73c9c"
	I0722 04:20:33.003304    4522 logs.go:123] Gathering logs for coredns [35e09cb53f8d] ...
	I0722 04:20:33.003314    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35e09cb53f8d"
	I0722 04:20:33.014869    4522 logs.go:123] Gathering logs for kube-scheduler [d2d617658892] ...
	I0722 04:20:33.014881    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2d617658892"
	I0722 04:20:33.026573    4522 logs.go:123] Gathering logs for kube-controller-manager [1bdf989f8c59] ...
	I0722 04:20:33.026585    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1bdf989f8c59"
	I0722 04:20:33.038997    4522 logs.go:123] Gathering logs for kube-apiserver [5045415bfa4b] ...
	I0722 04:20:33.039012    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5045415bfa4b"
	I0722 04:20:33.051146    4522 logs.go:123] Gathering logs for etcd [31e229b2e880] ...
	I0722 04:20:33.051161    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31e229b2e880"
	I0722 04:20:33.066075    4522 logs.go:123] Gathering logs for kube-scheduler [bb2de59a46b2] ...
	I0722 04:20:33.066088    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb2de59a46b2"
	I0722 04:20:33.077672    4522 logs.go:123] Gathering logs for Docker ...
	I0722 04:20:33.077684    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0722 04:20:33.101290    4522 logs.go:123] Gathering logs for dmesg ...
	I0722 04:20:33.101298    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 04:20:33.105858    4522 logs.go:123] Gathering logs for kube-apiserver [dffc81da16cb] ...
	I0722 04:20:33.105865    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dffc81da16cb"
	I0722 04:20:33.120243    4522 logs.go:123] Gathering logs for kube-proxy [92576e20db6b] ...
	I0722 04:20:33.120253    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92576e20db6b"
	I0722 04:20:33.135352    4522 out.go:304] Setting ErrFile to fd 2...
	I0722 04:20:33.135366    4522 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0722 04:20:33.135393    4522 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0722 04:20:33.135399    4522 out.go:239]   Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: W0722 11:17:46.135858    4280 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	  Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: W0722 11:17:46.135858    4280 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:20:33.135405    4522 out.go:239]   Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: E0722 11:17:46.135900    4280 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	  Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: E0722 11:17:46.135900    4280 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	I0722 04:20:33.135442    4522 out.go:304] Setting ErrFile to fd 2...
	I0722 04:20:33.135447    4522 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:20:43.138992    4522 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:20:48.141088    4522 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
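The pair of lines above (api_server.go:253 checking /healthz, then api_server.go:269 reporting "stopped" after the client timeout) recurs for the rest of this section. A minimal illustrative sketch of such a poll-and-timeout loop, assuming a bare HTTPS client with certificate verification skipped rather than minikube's real certificate-backed client:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // checkHealthz issues one GET against the apiserver healthz endpoint with a
    // short timeout, mirroring the ~5s gap between the "Checking" and "stopped" lines.
    func checkHealthz(url string) error {
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get(url)
        if err != nil {
            return fmt.Errorf("stopped: %s: %w", url, err)
        }
        defer resp.Body.Close()
        if resp.StatusCode != http.StatusOK {
            return fmt.Errorf("healthz returned status %d", resp.StatusCode)
        }
        return nil
    }

    func main() {
        // The timestamps in the log suggest a new check roughly every 10 seconds
        // after the previous one times out.
        for {
            err := checkHealthz("https://10.0.2.15:8443/healthz")
            if err == nil {
                fmt.Println("apiserver healthy")
                return
            }
            fmt.Println(err)
            time.Sleep(10 * time.Second)
        }
    }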
	I0722 04:20:48.141299    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0722 04:20:48.159183    4522 logs.go:276] 2 containers: [dffc81da16cb 5045415bfa4b]
	I0722 04:20:48.159264    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0722 04:20:48.172199    4522 logs.go:276] 2 containers: [8f8f38b73c9c 31e229b2e880]
	I0722 04:20:48.172285    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0722 04:20:48.183691    4522 logs.go:276] 1 containers: [35e09cb53f8d]
	I0722 04:20:48.183766    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0722 04:20:48.194757    4522 logs.go:276] 2 containers: [bb2de59a46b2 d2d617658892]
	I0722 04:20:48.194832    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0722 04:20:48.205256    4522 logs.go:276] 1 containers: [92576e20db6b]
	I0722 04:20:48.205331    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0722 04:20:48.215620    4522 logs.go:276] 2 containers: [d407493c2b8e 1bdf989f8c59]
	I0722 04:20:48.215693    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0722 04:20:48.226348    4522 logs.go:276] 0 containers: []
	W0722 04:20:48.226360    4522 logs.go:278] No container was found matching "kindnet"
	I0722 04:20:48.226430    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0722 04:20:48.236489    4522 logs.go:276] 2 containers: [404815c2fffd b0f51bb80a22]
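Before each gathering pass, minikube enumerates the container IDs for each control-plane component with a docker name filter (logs.go:276) and then tails the last 400 lines of each (the docker logs commands that follow). A rough equivalent in Go, assuming direct local access to docker rather than minikube's ssh_runner into the VM:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs mirrors `docker ps -a --filter=name=k8s_<component> --format={{.ID}}`.
    func containerIDs(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        ids, err := containerIDs("kube-apiserver")
        if err != nil {
            panic(err)
        }
        fmt.Printf("%d containers: %v\n", len(ids), ids)
        for _, id := range ids {
            // Tail the last 400 lines, as in the `docker logs --tail 400 <id>` calls above.
            logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
            fmt.Println(string(logs))
        }
    }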
	I0722 04:20:48.236507    4522 logs.go:123] Gathering logs for describe nodes ...
	I0722 04:20:48.236513    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0722 04:20:48.270785    4522 logs.go:123] Gathering logs for etcd [31e229b2e880] ...
	I0722 04:20:48.270798    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31e229b2e880"
	I0722 04:20:48.290216    4522 logs.go:123] Gathering logs for storage-provisioner [404815c2fffd] ...
	I0722 04:20:48.290228    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 404815c2fffd"
	I0722 04:20:48.302190    4522 logs.go:123] Gathering logs for container status ...
	I0722 04:20:48.302205    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 04:20:48.313986    4522 logs.go:123] Gathering logs for kube-scheduler [d2d617658892] ...
	I0722 04:20:48.313997    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2d617658892"
	I0722 04:20:48.325661    4522 logs.go:123] Gathering logs for kube-proxy [92576e20db6b] ...
	I0722 04:20:48.325672    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92576e20db6b"
	I0722 04:20:48.337448    4522 logs.go:123] Gathering logs for kube-controller-manager [1bdf989f8c59] ...
	I0722 04:20:48.337463    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1bdf989f8c59"
	I0722 04:20:48.354039    4522 logs.go:123] Gathering logs for Docker ...
	I0722 04:20:48.354049    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0722 04:20:48.377378    4522 logs.go:123] Gathering logs for kube-apiserver [5045415bfa4b] ...
	I0722 04:20:48.377387    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5045415bfa4b"
	I0722 04:20:48.390719    4522 logs.go:123] Gathering logs for etcd [8f8f38b73c9c] ...
	I0722 04:20:48.390731    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f8f38b73c9c"
	I0722 04:20:48.404235    4522 logs.go:123] Gathering logs for kube-controller-manager [d407493c2b8e] ...
	I0722 04:20:48.404248    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d407493c2b8e"
	I0722 04:20:48.421598    4522 logs.go:123] Gathering logs for storage-provisioner [b0f51bb80a22] ...
	I0722 04:20:48.421610    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0f51bb80a22"
	I0722 04:20:48.434559    4522 logs.go:123] Gathering logs for kube-scheduler [bb2de59a46b2] ...
	I0722 04:20:48.434571    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb2de59a46b2"
	I0722 04:20:48.455113    4522 logs.go:123] Gathering logs for kubelet ...
	I0722 04:20:48.455127    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0722 04:20:48.489107    4522 logs.go:138] Found kubelet problem: Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: W0722 11:17:46.135858    4280 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:20:48.489203    4522 logs.go:138] Found kubelet problem: Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: E0722 11:17:46.135900    4280 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
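The two warnings above come from logs.go:138, which scans the journalctl output for known failure signatures before deciding what to surface as "Problems detected in kubelet". A simplified sketch of that kind of scan, with the pattern list reduced to the single reflector error seen here (minikube's real list is longer):

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    // problemPatterns is a reduced, illustrative list; minikube matches many more
    // kubelet failure signatures than this.
    var problemPatterns = []string{
        "failed to list *v1.ConfigMap",
        "Failed to watch *v1.ConfigMap",
    }

    func main() {
        // Feed the output of `journalctl -u kubelet -n 400` on stdin.
        scanner := bufio.NewScanner(os.Stdin)
        for scanner.Scan() {
            line := scanner.Text()
            for _, p := range problemPatterns {
                if strings.Contains(line, p) {
                    fmt.Println("Found kubelet problem:", line)
                    break
                }
            }
        }
    }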
	I0722 04:20:48.489946    4522 logs.go:123] Gathering logs for dmesg ...
	I0722 04:20:48.489952    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 04:20:48.494050    4522 logs.go:123] Gathering logs for kube-apiserver [dffc81da16cb] ...
	I0722 04:20:48.494058    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dffc81da16cb"
	I0722 04:20:48.510451    4522 logs.go:123] Gathering logs for coredns [35e09cb53f8d] ...
	I0722 04:20:48.510461    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35e09cb53f8d"
	I0722 04:20:48.521693    4522 out.go:304] Setting ErrFile to fd 2...
	I0722 04:20:48.521704    4522 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0722 04:20:48.521733    4522 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0722 04:20:48.521737    4522 out.go:239]   Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: W0722 11:17:46.135858    4280 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	  Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: W0722 11:17:46.135858    4280 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:20:48.521743    4522 out.go:239]   Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: E0722 11:17:46.135900    4280 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	  Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: E0722 11:17:46.135900    4280 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	I0722 04:20:48.521751    4522 out.go:304] Setting ErrFile to fd 2...
	I0722 04:20:48.521755    4522 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:20:58.525728    4522 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:21:03.527867    4522 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:21:03.527985    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0722 04:21:03.538986    4522 logs.go:276] 2 containers: [dffc81da16cb 5045415bfa4b]
	I0722 04:21:03.539069    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0722 04:21:03.549926    4522 logs.go:276] 2 containers: [8f8f38b73c9c 31e229b2e880]
	I0722 04:21:03.549992    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0722 04:21:03.560560    4522 logs.go:276] 1 containers: [35e09cb53f8d]
	I0722 04:21:03.560629    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0722 04:21:03.571325    4522 logs.go:276] 2 containers: [bb2de59a46b2 d2d617658892]
	I0722 04:21:03.571402    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0722 04:21:03.581897    4522 logs.go:276] 1 containers: [92576e20db6b]
	I0722 04:21:03.581969    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0722 04:21:03.592522    4522 logs.go:276] 2 containers: [d407493c2b8e 1bdf989f8c59]
	I0722 04:21:03.592598    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0722 04:21:03.602472    4522 logs.go:276] 0 containers: []
	W0722 04:21:03.602484    4522 logs.go:278] No container was found matching "kindnet"
	I0722 04:21:03.602537    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0722 04:21:03.617143    4522 logs.go:276] 2 containers: [404815c2fffd b0f51bb80a22]
	I0722 04:21:03.617160    4522 logs.go:123] Gathering logs for Docker ...
	I0722 04:21:03.617165    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0722 04:21:03.640417    4522 logs.go:123] Gathering logs for kubelet ...
	I0722 04:21:03.640428    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0722 04:21:03.675035    4522 logs.go:138] Found kubelet problem: Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: W0722 11:17:46.135858    4280 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:21:03.675127    4522 logs.go:138] Found kubelet problem: Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: E0722 11:17:46.135900    4280 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	I0722 04:21:03.675819    4522 logs.go:123] Gathering logs for coredns [35e09cb53f8d] ...
	I0722 04:21:03.675824    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35e09cb53f8d"
	I0722 04:21:03.686914    4522 logs.go:123] Gathering logs for kube-proxy [92576e20db6b] ...
	I0722 04:21:03.686924    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92576e20db6b"
	I0722 04:21:03.698890    4522 logs.go:123] Gathering logs for kube-controller-manager [d407493c2b8e] ...
	I0722 04:21:03.698899    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d407493c2b8e"
	I0722 04:21:03.716604    4522 logs.go:123] Gathering logs for dmesg ...
	I0722 04:21:03.716612    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 04:21:03.720805    4522 logs.go:123] Gathering logs for kube-scheduler [bb2de59a46b2] ...
	I0722 04:21:03.720812    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb2de59a46b2"
	I0722 04:21:03.732359    4522 logs.go:123] Gathering logs for storage-provisioner [404815c2fffd] ...
	I0722 04:21:03.732370    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 404815c2fffd"
	I0722 04:21:03.744363    4522 logs.go:123] Gathering logs for container status ...
	I0722 04:21:03.744374    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 04:21:03.756351    4522 logs.go:123] Gathering logs for etcd [31e229b2e880] ...
	I0722 04:21:03.756363    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31e229b2e880"
	I0722 04:21:03.770705    4522 logs.go:123] Gathering logs for kube-controller-manager [1bdf989f8c59] ...
	I0722 04:21:03.770716    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1bdf989f8c59"
	I0722 04:21:03.782240    4522 logs.go:123] Gathering logs for kube-scheduler [d2d617658892] ...
	I0722 04:21:03.782266    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2d617658892"
	I0722 04:21:03.793031    4522 logs.go:123] Gathering logs for storage-provisioner [b0f51bb80a22] ...
	I0722 04:21:03.793044    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0f51bb80a22"
	I0722 04:21:03.805977    4522 logs.go:123] Gathering logs for describe nodes ...
	I0722 04:21:03.805990    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0722 04:21:03.843639    4522 logs.go:123] Gathering logs for kube-apiserver [dffc81da16cb] ...
	I0722 04:21:03.843650    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dffc81da16cb"
	I0722 04:21:03.857794    4522 logs.go:123] Gathering logs for kube-apiserver [5045415bfa4b] ...
	I0722 04:21:03.857808    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5045415bfa4b"
	I0722 04:21:03.869973    4522 logs.go:123] Gathering logs for etcd [8f8f38b73c9c] ...
	I0722 04:21:03.869983    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f8f38b73c9c"
	I0722 04:21:03.883369    4522 out.go:304] Setting ErrFile to fd 2...
	I0722 04:21:03.883379    4522 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0722 04:21:03.883407    4522 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0722 04:21:03.883415    4522 out.go:239]   Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: W0722 11:17:46.135858    4280 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	  Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: W0722 11:17:46.135858    4280 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:21:03.883420    4522 out.go:239]   Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: E0722 11:17:46.135900    4280 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	  Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: E0722 11:17:46.135900    4280 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	I0722 04:21:03.883424    4522 out.go:304] Setting ErrFile to fd 2...
	I0722 04:21:03.883427    4522 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:21:13.887434    4522 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:21:18.889886    4522 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:21:18.890175    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0722 04:21:18.912962    4522 logs.go:276] 2 containers: [dffc81da16cb 5045415bfa4b]
	I0722 04:21:18.913085    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0722 04:21:18.929422    4522 logs.go:276] 2 containers: [8f8f38b73c9c 31e229b2e880]
	I0722 04:21:18.929501    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0722 04:21:18.941635    4522 logs.go:276] 1 containers: [35e09cb53f8d]
	I0722 04:21:18.941709    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0722 04:21:18.954299    4522 logs.go:276] 2 containers: [bb2de59a46b2 d2d617658892]
	I0722 04:21:18.954374    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0722 04:21:18.969979    4522 logs.go:276] 1 containers: [92576e20db6b]
	I0722 04:21:18.970044    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0722 04:21:18.980298    4522 logs.go:276] 2 containers: [d407493c2b8e 1bdf989f8c59]
	I0722 04:21:18.980365    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0722 04:21:18.990505    4522 logs.go:276] 0 containers: []
	W0722 04:21:18.990517    4522 logs.go:278] No container was found matching "kindnet"
	I0722 04:21:18.990572    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0722 04:21:19.004850    4522 logs.go:276] 2 containers: [404815c2fffd b0f51bb80a22]
	I0722 04:21:19.004868    4522 logs.go:123] Gathering logs for kubelet ...
	I0722 04:21:19.004874    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0722 04:21:19.039170    4522 logs.go:138] Found kubelet problem: Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: W0722 11:17:46.135858    4280 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:21:19.039268    4522 logs.go:138] Found kubelet problem: Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: E0722 11:17:46.135900    4280 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	I0722 04:21:19.040020    4522 logs.go:123] Gathering logs for describe nodes ...
	I0722 04:21:19.040028    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0722 04:21:19.080278    4522 logs.go:123] Gathering logs for kube-apiserver [5045415bfa4b] ...
	I0722 04:21:19.080290    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5045415bfa4b"
	I0722 04:21:19.095720    4522 logs.go:123] Gathering logs for etcd [8f8f38b73c9c] ...
	I0722 04:21:19.095731    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f8f38b73c9c"
	I0722 04:21:19.109232    4522 logs.go:123] Gathering logs for kube-scheduler [d2d617658892] ...
	I0722 04:21:19.109243    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2d617658892"
	I0722 04:21:19.120430    4522 logs.go:123] Gathering logs for storage-provisioner [b0f51bb80a22] ...
	I0722 04:21:19.120445    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0f51bb80a22"
	I0722 04:21:19.132643    4522 logs.go:123] Gathering logs for Docker ...
	I0722 04:21:19.132655    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0722 04:21:19.157345    4522 logs.go:123] Gathering logs for container status ...
	I0722 04:21:19.157356    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 04:21:19.169235    4522 logs.go:123] Gathering logs for kube-apiserver [dffc81da16cb] ...
	I0722 04:21:19.169252    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dffc81da16cb"
	I0722 04:21:19.190732    4522 logs.go:123] Gathering logs for kube-controller-manager [1bdf989f8c59] ...
	I0722 04:21:19.190744    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1bdf989f8c59"
	I0722 04:21:19.206472    4522 logs.go:123] Gathering logs for storage-provisioner [404815c2fffd] ...
	I0722 04:21:19.206483    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 404815c2fffd"
	I0722 04:21:19.217680    4522 logs.go:123] Gathering logs for dmesg ...
	I0722 04:21:19.217692    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 04:21:19.221934    4522 logs.go:123] Gathering logs for coredns [35e09cb53f8d] ...
	I0722 04:21:19.221939    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35e09cb53f8d"
	I0722 04:21:19.233172    4522 logs.go:123] Gathering logs for kube-scheduler [bb2de59a46b2] ...
	I0722 04:21:19.233183    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb2de59a46b2"
	I0722 04:21:19.245370    4522 logs.go:123] Gathering logs for kube-controller-manager [d407493c2b8e] ...
	I0722 04:21:19.245385    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d407493c2b8e"
	I0722 04:21:19.263060    4522 logs.go:123] Gathering logs for etcd [31e229b2e880] ...
	I0722 04:21:19.263071    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31e229b2e880"
	I0722 04:21:19.277271    4522 logs.go:123] Gathering logs for kube-proxy [92576e20db6b] ...
	I0722 04:21:19.277282    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92576e20db6b"
	I0722 04:21:19.289457    4522 out.go:304] Setting ErrFile to fd 2...
	I0722 04:21:19.289467    4522 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0722 04:21:19.289495    4522 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0722 04:21:19.289500    4522 out.go:239]   Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: W0722 11:17:46.135858    4280 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	  Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: W0722 11:17:46.135858    4280 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:21:19.289503    4522 out.go:239]   Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: E0722 11:17:46.135900    4280 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	  Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: E0722 11:17:46.135900    4280 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	I0722 04:21:19.289509    4522 out.go:304] Setting ErrFile to fd 2...
	I0722 04:21:19.289511    4522 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:21:29.293518    4522 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:21:34.295814    4522 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:21:34.295899    4522 kubeadm.go:597] duration metric: took 4m7.386838125s to restartPrimaryControlPlane
	W0722 04:21:34.295940    4522 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0722 04:21:34.295959    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0722 04:21:35.289024    4522 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 04:21:35.294110    4522 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0722 04:21:35.296981    4522 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 04:21:35.299933    4522 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 04:21:35.299939    4522 kubeadm.go:157] found existing configuration files:
	
	I0722 04:21:35.299964    4522 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50263 /etc/kubernetes/admin.conf
	I0722 04:21:35.302400    4522 kubeadm.go:163] "https://control-plane.minikube.internal:50263" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50263 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 04:21:35.302425    4522 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 04:21:35.305004    4522 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50263 /etc/kubernetes/kubelet.conf
	I0722 04:21:35.307885    4522 kubeadm.go:163] "https://control-plane.minikube.internal:50263" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50263 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 04:21:35.307907    4522 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 04:21:35.310577    4522 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50263 /etc/kubernetes/controller-manager.conf
	I0722 04:21:35.313098    4522 kubeadm.go:163] "https://control-plane.minikube.internal:50263" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50263 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 04:21:35.313117    4522 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 04:21:35.316061    4522 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50263 /etc/kubernetes/scheduler.conf
	I0722 04:21:35.318713    4522 kubeadm.go:163] "https://control-plane.minikube.internal:50263" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50263 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 04:21:35.318735    4522 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
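kubeadm.go:163 treats each kubeconfig as stale unless it references the expected control-plane endpoint and removes it otherwise; here every grep fails simply because the files do not exist after the reset. A hedged sketch of that check, with the endpoint and file list taken from the log above:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        endpoint := "https://control-plane.minikube.internal:50263"
        files := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, f := range files {
            data, err := os.ReadFile(f)
            if err != nil || !strings.Contains(string(data), endpoint) {
                // Same effect as the `sudo grep ... ` / `sudo rm -f ...` pairs in the log:
                // a config that is missing or points elsewhere is removed before kubeadm init.
                fmt.Printf("%q may not be in %s - will remove\n", endpoint, f)
                os.Remove(f)
            }
        }
    }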
	I0722 04:21:35.321139    4522 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0722 04:21:35.338968    4522 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0722 04:21:35.339020    4522 kubeadm.go:310] [preflight] Running pre-flight checks
	I0722 04:21:35.387507    4522 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0722 04:21:35.387563    4522 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0722 04:21:35.387633    4522 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0722 04:21:35.437994    4522 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0722 04:21:35.441424    4522 out.go:204]   - Generating certificates and keys ...
	I0722 04:21:35.441463    4522 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0722 04:21:35.441496    4522 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0722 04:21:35.441548    4522 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0722 04:21:35.441584    4522 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0722 04:21:35.441620    4522 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0722 04:21:35.441651    4522 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0722 04:21:35.441688    4522 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0722 04:21:35.441721    4522 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0722 04:21:35.441758    4522 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0722 04:21:35.441797    4522 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0722 04:21:35.441817    4522 kubeadm.go:310] [certs] Using the existing "sa" key
	I0722 04:21:35.441843    4522 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0722 04:21:35.620554    4522 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0722 04:21:35.718704    4522 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0722 04:21:35.761862    4522 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0722 04:21:35.842051    4522 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0722 04:21:35.872233    4522 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0722 04:21:35.872567    4522 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0722 04:21:35.872646    4522 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0722 04:21:35.967959    4522 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0722 04:21:35.971118    4522 out.go:204]   - Booting up control plane ...
	I0722 04:21:35.971163    4522 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0722 04:21:35.971203    4522 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0722 04:21:35.971265    4522 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0722 04:21:35.971303    4522 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0722 04:21:35.971502    4522 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0722 04:21:40.973720    4522 kubeadm.go:310] [apiclient] All control plane components are healthy after 5.002844 seconds
	I0722 04:21:40.973946    4522 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0722 04:21:40.982578    4522 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0722 04:21:41.492749    4522 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0722 04:21:41.492870    4522 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-724000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0722 04:21:41.997291    4522 kubeadm.go:310] [bootstrap-token] Using token: 3b2ac4.5cymdjmizcvjhc80
	I0722 04:21:42.000868    4522 out.go:204]   - Configuring RBAC rules ...
	I0722 04:21:42.000951    4522 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0722 04:21:42.002958    4522 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0722 04:21:42.008485    4522 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0722 04:21:42.009510    4522 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0722 04:21:42.010365    4522 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0722 04:21:42.011207    4522 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0722 04:21:42.015694    4522 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0722 04:21:42.182472    4522 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0722 04:21:42.408773    4522 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0722 04:21:42.409775    4522 kubeadm.go:310] 
	I0722 04:21:42.409807    4522 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0722 04:21:42.409819    4522 kubeadm.go:310] 
	I0722 04:21:42.409870    4522 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0722 04:21:42.409873    4522 kubeadm.go:310] 
	I0722 04:21:42.409887    4522 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0722 04:21:42.409918    4522 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0722 04:21:42.409943    4522 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0722 04:21:42.409948    4522 kubeadm.go:310] 
	I0722 04:21:42.410049    4522 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0722 04:21:42.410054    4522 kubeadm.go:310] 
	I0722 04:21:42.410100    4522 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0722 04:21:42.410102    4522 kubeadm.go:310] 
	I0722 04:21:42.410131    4522 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0722 04:21:42.410174    4522 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0722 04:21:42.410228    4522 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0722 04:21:42.410235    4522 kubeadm.go:310] 
	I0722 04:21:42.410274    4522 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0722 04:21:42.410312    4522 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0722 04:21:42.410318    4522 kubeadm.go:310] 
	I0722 04:21:42.410357    4522 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 3b2ac4.5cymdjmizcvjhc80 \
	I0722 04:21:42.410427    4522 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e1f95f96cbafa48be8d9b2581ace651393ef041feb8f94ca3ac47ac6fd85c5e4 \
	I0722 04:21:42.410444    4522 kubeadm.go:310] 	--control-plane 
	I0722 04:21:42.410447    4522 kubeadm.go:310] 
	I0722 04:21:42.410508    4522 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0722 04:21:42.410512    4522 kubeadm.go:310] 
	I0722 04:21:42.410553    4522 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 3b2ac4.5cymdjmizcvjhc80 \
	I0722 04:21:42.410642    4522 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e1f95f96cbafa48be8d9b2581ace651393ef041feb8f94ca3ac47ac6fd85c5e4 
	I0722 04:21:42.410809    4522 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
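The --discovery-token-ca-cert-hash printed in the join command above is the SHA-256 of the cluster CA's DER-encoded Subject Public Key Info. A small sketch that recomputes it; the ca.crt path is an assumption based on the certificateDir "/var/lib/minikube/certs" reported earlier in this run:

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        // Path assumed from the kubeadm certificateDir above; adjust for other clusters.
        pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            panic("no PEM block found in ca.crt")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the cluster CA.
        spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
        if err != nil {
            panic(err)
        }
        fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
    }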
	I0722 04:21:42.410819    4522 cni.go:84] Creating CNI manager for ""
	I0722 04:21:42.410827    4522 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0722 04:21:42.414563    4522 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0722 04:21:42.421487    4522 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0722 04:21:42.424393    4522 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
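cni.go selects the bridge CNI for the qemu2 driver with the docker runtime and copies a 496-byte conflist into /etc/cni/net.d (the scp line above). The exact payload is not shown in the log; the snippet below writes a minimal, hypothetical bridge conflist of the same general shape:

    package main

    import "os"

    // minimalBridgeConflist is a hypothetical example of a bridge CNI configuration;
    // the actual 1-k8s.conflist minikube ships is not reproduced in this log.
    const minimalBridgeConflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }`

    func main() {
        if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
            panic(err)
        }
        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(minimalBridgeConflist), 0o644); err != nil {
            panic(err)
        }
    }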
	I0722 04:21:42.429080    4522 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0722 04:21:42.429125    4522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 04:21:42.429160    4522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-724000 minikube.k8s.io/updated_at=2024_07_22T04_21_42_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8e5b1d22910d5d447b525af478862a848159d7b7 minikube.k8s.io/name=running-upgrade-724000 minikube.k8s.io/primary=true
	I0722 04:21:42.474997    4522 kubeadm.go:1113] duration metric: took 45.907167ms to wait for elevateKubeSystemPrivileges
	I0722 04:21:42.475001    4522 ops.go:34] apiserver oom_adj: -16
	I0722 04:21:42.475093    4522 kubeadm.go:394] duration metric: took 4m15.580408833s to StartCluster
	I0722 04:21:42.475109    4522 settings.go:142] acquiring lock: {Name:mk640939e683dda0ffda5b348284f38e73fbc066 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 04:21:42.475205    4522 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19313-1127/kubeconfig
	I0722 04:21:42.475613    4522 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19313-1127/kubeconfig: {Name:mkb5cae8b3f3a2ff5a3e393f1e4daf97762f1a5b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 04:21:42.475828    4522 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0722 04:21:42.475836    4522 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0722 04:21:42.475877    4522 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-724000"
	I0722 04:21:42.475913    4522 config.go:182] Loaded profile config "running-upgrade-724000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0722 04:21:42.475928    4522 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-724000"
	I0722 04:21:42.475941    4522 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-724000"
	I0722 04:21:42.475948    4522 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-724000"
	W0722 04:21:42.475953    4522 addons.go:243] addon storage-provisioner should already be in state true
	I0722 04:21:42.475966    4522 host.go:66] Checking if "running-upgrade-724000" exists ...
	I0722 04:21:42.476196    4522 retry.go:31] will retry after 1.17937289s: connect: dial unix /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/running-upgrade-724000/monitor: connect: connection refused
	I0722 04:21:42.476864    4522 kapi.go:59] client config for running-upgrade-724000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/running-upgrade-724000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/running-upgrade-724000/client.key", CAFile:"/Users/jenkins/minikube-integration/19313-1127/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x102577790), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
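The kapi.go line above dumps the client-go rest.Config that minikube builds from the profile's client certificate, key, and CA. A short sketch of constructing an equivalent config with client-go, reusing the paths from the log; the Nodes().List call is only there to show where the i/o timeouts seen later would surface:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    func main() {
        cfg := &rest.Config{
            Host: "https://10.0.2.15:8443",
            TLSClientConfig: rest.TLSClientConfig{
                CertFile: "/Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/running-upgrade-724000/client.crt",
                KeyFile:  "/Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/running-upgrade-724000/client.key",
                CAFile:   "/Users/jenkins/minikube-integration/19313-1127/.minikube/ca.crt",
            },
        }
        clientset, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        nodes, err := clientset.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
        if err != nil {
            // With the apiserver unreachable, this is where the dial timeouts come from.
            panic(err)
        }
        fmt.Println("nodes:", len(nodes.Items))
    }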
	I0722 04:21:42.476978    4522 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-724000"
	W0722 04:21:42.476983    4522 addons.go:243] addon default-storageclass should already be in state true
	I0722 04:21:42.476990    4522 host.go:66] Checking if "running-upgrade-724000" exists ...
	I0722 04:21:42.477509    4522 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0722 04:21:42.477515    4522 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0722 04:21:42.477521    4522 sshutil.go:53] new ssh client: &{IP:localhost Port:50231 SSHKeyPath:/Users/jenkins/minikube-integration/19313-1127/.minikube/machines/running-upgrade-724000/id_rsa Username:docker}
	I0722 04:21:42.479428    4522 out.go:177] * Verifying Kubernetes components...
	I0722 04:21:42.486380    4522 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 04:21:42.579576    4522 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 04:21:42.584442    4522 api_server.go:52] waiting for apiserver process to appear ...
	I0722 04:21:42.584485    4522 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 04:21:42.589146    4522 api_server.go:72] duration metric: took 113.308083ms to wait for apiserver process to appear ...
	I0722 04:21:42.589154    4522 api_server.go:88] waiting for apiserver healthz status ...
	I0722 04:21:42.589161    4522 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:21:42.637036    4522 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0722 04:21:43.662465    4522 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 04:21:43.666465    4522 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0722 04:21:43.666472    4522 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0722 04:21:43.666481    4522 sshutil.go:53] new ssh client: &{IP:localhost Port:50231 SSHKeyPath:/Users/jenkins/minikube-integration/19313-1127/.minikube/machines/running-upgrade-724000/id_rsa Username:docker}
	I0722 04:21:43.706180    4522 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
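Each addon is installed by copying its manifest into /etc/kubernetes/addons and running kubectl apply against the in-VM kubeconfig, as in the two commands above. An illustrative wrapper around that apply step, assuming the kubectl binary path and manifest locations shown in the log and a shell where sudo is available:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    // applyAddon mirrors the `sudo KUBECONFIG=... kubectl apply -f <manifest>` calls in the log.
    func applyAddon(manifest string) error {
        cmd := exec.Command("sudo",
            "KUBECONFIG=/var/lib/minikube/kubeconfig",
            "/var/lib/minikube/binaries/v1.24.1/kubectl",
            "apply", "-f", manifest)
        cmd.Stdout = os.Stdout
        cmd.Stderr = os.Stderr
        return cmd.Run()
    }

    func main() {
        for _, m := range []string{
            "/etc/kubernetes/addons/storageclass.yaml",
            "/etc/kubernetes/addons/storage-provisioner.yaml",
        } {
            if err := applyAddon(m); err != nil {
                fmt.Println("apply failed:", err)
            }
        }
    }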
	I0722 04:21:47.591188    4522 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:21:47.591231    4522 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:21:52.591445    4522 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:21:52.591510    4522 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:21:57.591783    4522 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:21:57.591823    4522 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:22:02.592245    4522 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:22:02.592298    4522 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:22:07.593164    4522 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:22:07.593218    4522 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:22:12.594014    4522 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:22:12.594060    4522 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0722 04:22:12.951679    4522 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0722 04:22:12.955358    4522 out.go:177] * Enabled addons: storage-provisioner
	I0722 04:22:12.962262    4522 addons.go:510] duration metric: took 30.48692575s for enable addons: enabled=[storage-provisioner]
	I0722 04:22:17.595222    4522 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:22:17.595268    4522 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:22:22.596792    4522 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:22:22.596822    4522 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:22:27.598580    4522 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:22:27.598614    4522 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:22:32.600685    4522 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:22:32.600708    4522 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:22:37.602831    4522 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:22:37.602865    4522 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:22:42.605029    4522 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:22:42.605127    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0722 04:22:42.623016    4522 logs.go:276] 1 containers: [ff0a72834be9]
	I0722 04:22:42.623072    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0722 04:22:42.635319    4522 logs.go:276] 1 containers: [a443754c5936]
	I0722 04:22:42.635404    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0722 04:22:42.647534    4522 logs.go:276] 2 containers: [cc88e2e59cc9 f695590f14ba]
	I0722 04:22:42.647609    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0722 04:22:42.659342    4522 logs.go:276] 1 containers: [19fea8cb2f86]
	I0722 04:22:42.659421    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0722 04:22:42.670597    4522 logs.go:276] 1 containers: [812f238bbb81]
	I0722 04:22:42.670665    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0722 04:22:42.682022    4522 logs.go:276] 1 containers: [e86dcf4cf2ad]
	I0722 04:22:42.682101    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0722 04:22:42.692446    4522 logs.go:276] 0 containers: []
	W0722 04:22:42.692457    4522 logs.go:278] No container was found matching "kindnet"
	I0722 04:22:42.692518    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0722 04:22:42.703893    4522 logs.go:276] 1 containers: [4b4fab967404]
	I0722 04:22:42.703911    4522 logs.go:123] Gathering logs for kube-proxy [812f238bbb81] ...
	I0722 04:22:42.703917    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 812f238bbb81"
	I0722 04:22:42.716458    4522 logs.go:123] Gathering logs for Docker ...
	I0722 04:22:42.716470    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0722 04:22:42.742740    4522 logs.go:123] Gathering logs for container status ...
	I0722 04:22:42.742757    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 04:22:42.758219    4522 logs.go:123] Gathering logs for dmesg ...
	I0722 04:22:42.758231    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 04:22:42.763223    4522 logs.go:123] Gathering logs for kube-scheduler [19fea8cb2f86] ...
	I0722 04:22:42.763231    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19fea8cb2f86"
	I0722 04:22:42.779713    4522 logs.go:123] Gathering logs for kube-apiserver [ff0a72834be9] ...
	I0722 04:22:42.779723    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff0a72834be9"
	I0722 04:22:42.794972    4522 logs.go:123] Gathering logs for etcd [a443754c5936] ...
	I0722 04:22:42.794983    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a443754c5936"
	I0722 04:22:42.809813    4522 logs.go:123] Gathering logs for coredns [cc88e2e59cc9] ...
	I0722 04:22:42.809826    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc88e2e59cc9"
	I0722 04:22:42.823133    4522 logs.go:123] Gathering logs for coredns [f695590f14ba] ...
	I0722 04:22:42.823145    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f695590f14ba"
	I0722 04:22:42.835491    4522 logs.go:123] Gathering logs for kube-controller-manager [e86dcf4cf2ad] ...
	I0722 04:22:42.835503    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e86dcf4cf2ad"
	I0722 04:22:42.853921    4522 logs.go:123] Gathering logs for storage-provisioner [4b4fab967404] ...
	I0722 04:22:42.853933    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b4fab967404"
	I0722 04:22:42.867462    4522 logs.go:123] Gathering logs for kubelet ...
	I0722 04:22:42.867473    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0722 04:22:42.886667    4522 logs.go:138] Found kubelet problem: Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: W0722 11:17:46.135858    4280 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:22:42.886761    4522 logs.go:138] Found kubelet problem: Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: E0722 11:17:46.135900    4280 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:22:42.902945    4522 logs.go:138] Found kubelet problem: Jul 22 11:21:55 running-upgrade-724000 kubelet[11305]: W0722 11:21:55.461534   11305 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:22:42.903040    4522 logs.go:138] Found kubelet problem: Jul 22 11:21:55 running-upgrade-724000 kubelet[11305]: E0722 11:21:55.461602   11305 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	I0722 04:22:42.904255    4522 logs.go:123] Gathering logs for describe nodes ...
	I0722 04:22:42.904263    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0722 04:22:42.941922    4522 out.go:304] Setting ErrFile to fd 2...
	I0722 04:22:42.941933    4522 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0722 04:22:42.941960    4522 out.go:239] X Problems detected in kubelet:
	W0722 04:22:42.941966    4522 out.go:239]   Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: W0722 11:17:46.135858    4280 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:22:42.941979    4522 out.go:239]   Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: E0722 11:17:46.135900    4280 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:22:42.941983    4522 out.go:239]   Jul 22 11:21:55 running-upgrade-724000 kubelet[11305]: W0722 11:21:55.461534   11305 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:22:42.941997    4522 out.go:239]   Jul 22 11:21:55 running-upgrade-724000 kubelet[11305]: E0722 11:21:55.461602   11305 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	I0722 04:22:42.942001    4522 out.go:304] Setting ErrFile to fd 2...
	I0722 04:22:42.942003    4522 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:22:52.945726    4522 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:22:57.947911    4522 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:22:57.948004    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0722 04:22:57.959418    4522 logs.go:276] 1 containers: [ff0a72834be9]
	I0722 04:22:57.959484    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0722 04:22:57.970685    4522 logs.go:276] 1 containers: [a443754c5936]
	I0722 04:22:57.970761    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0722 04:22:57.981307    4522 logs.go:276] 2 containers: [cc88e2e59cc9 f695590f14ba]
	I0722 04:22:57.981379    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0722 04:22:57.991739    4522 logs.go:276] 1 containers: [19fea8cb2f86]
	I0722 04:22:57.991804    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0722 04:22:58.002344    4522 logs.go:276] 1 containers: [812f238bbb81]
	I0722 04:22:58.002418    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0722 04:22:58.012506    4522 logs.go:276] 1 containers: [e86dcf4cf2ad]
	I0722 04:22:58.012583    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0722 04:22:58.022497    4522 logs.go:276] 0 containers: []
	W0722 04:22:58.022508    4522 logs.go:278] No container was found matching "kindnet"
	I0722 04:22:58.022565    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0722 04:22:58.032188    4522 logs.go:276] 1 containers: [4b4fab967404]
	I0722 04:22:58.032201    4522 logs.go:123] Gathering logs for etcd [a443754c5936] ...
	I0722 04:22:58.032206    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a443754c5936"
	I0722 04:22:58.046296    4522 logs.go:123] Gathering logs for coredns [f695590f14ba] ...
	I0722 04:22:58.046311    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f695590f14ba"
	I0722 04:22:58.058317    4522 logs.go:123] Gathering logs for kube-controller-manager [e86dcf4cf2ad] ...
	I0722 04:22:58.058330    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e86dcf4cf2ad"
	I0722 04:22:58.076796    4522 logs.go:123] Gathering logs for storage-provisioner [4b4fab967404] ...
	I0722 04:22:58.076809    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b4fab967404"
	I0722 04:22:58.087903    4522 logs.go:123] Gathering logs for Docker ...
	I0722 04:22:58.087916    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0722 04:22:58.113052    4522 logs.go:123] Gathering logs for kubelet ...
	I0722 04:22:58.113060    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0722 04:22:58.130875    4522 logs.go:138] Found kubelet problem: Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: W0722 11:17:46.135858    4280 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:22:58.130969    4522 logs.go:138] Found kubelet problem: Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: E0722 11:17:46.135900    4280 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:22:58.146669    4522 logs.go:138] Found kubelet problem: Jul 22 11:21:55 running-upgrade-724000 kubelet[11305]: W0722 11:21:55.461534   11305 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:22:58.146761    4522 logs.go:138] Found kubelet problem: Jul 22 11:21:55 running-upgrade-724000 kubelet[11305]: E0722 11:21:55.461602   11305 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	I0722 04:22:58.147937    4522 logs.go:123] Gathering logs for describe nodes ...
	I0722 04:22:58.147941    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0722 04:22:58.183918    4522 logs.go:123] Gathering logs for kube-apiserver [ff0a72834be9] ...
	I0722 04:22:58.183931    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff0a72834be9"
	I0722 04:22:58.202283    4522 logs.go:123] Gathering logs for coredns [cc88e2e59cc9] ...
	I0722 04:22:58.202294    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc88e2e59cc9"
	I0722 04:22:58.214076    4522 logs.go:123] Gathering logs for kube-scheduler [19fea8cb2f86] ...
	I0722 04:22:58.214087    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19fea8cb2f86"
	I0722 04:22:58.229242    4522 logs.go:123] Gathering logs for kube-proxy [812f238bbb81] ...
	I0722 04:22:58.229256    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 812f238bbb81"
	I0722 04:22:58.244129    4522 logs.go:123] Gathering logs for container status ...
	I0722 04:22:58.244143    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 04:22:58.255737    4522 logs.go:123] Gathering logs for dmesg ...
	I0722 04:22:58.255751    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 04:22:58.260483    4522 out.go:304] Setting ErrFile to fd 2...
	I0722 04:22:58.260492    4522 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0722 04:22:58.260516    4522 out.go:239] X Problems detected in kubelet:
	W0722 04:22:58.260521    4522 out.go:239]   Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: W0722 11:17:46.135858    4280 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:22:58.260524    4522 out.go:239]   Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: E0722 11:17:46.135900    4280 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:22:58.260529    4522 out.go:239]   Jul 22 11:21:55 running-upgrade-724000 kubelet[11305]: W0722 11:21:55.461534   11305 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:22:58.260533    4522 out.go:239]   Jul 22 11:21:55 running-upgrade-724000 kubelet[11305]: E0722 11:21:55.461602   11305 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	I0722 04:22:58.260547    4522 out.go:304] Setting ErrFile to fd 2...
	I0722 04:22:58.260550    4522 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:23:08.264523    4522 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:23:13.266820    4522 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:23:13.267228    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0722 04:23:13.306393    4522 logs.go:276] 1 containers: [ff0a72834be9]
	I0722 04:23:13.306559    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0722 04:23:13.328794    4522 logs.go:276] 1 containers: [a443754c5936]
	I0722 04:23:13.328881    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0722 04:23:13.346801    4522 logs.go:276] 2 containers: [cc88e2e59cc9 f695590f14ba]
	I0722 04:23:13.346870    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0722 04:23:13.359554    4522 logs.go:276] 1 containers: [19fea8cb2f86]
	I0722 04:23:13.359630    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0722 04:23:13.369903    4522 logs.go:276] 1 containers: [812f238bbb81]
	I0722 04:23:13.369969    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0722 04:23:13.380167    4522 logs.go:276] 1 containers: [e86dcf4cf2ad]
	I0722 04:23:13.380242    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0722 04:23:13.390730    4522 logs.go:276] 0 containers: []
	W0722 04:23:13.390744    4522 logs.go:278] No container was found matching "kindnet"
	I0722 04:23:13.390800    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0722 04:23:13.401632    4522 logs.go:276] 1 containers: [4b4fab967404]
	I0722 04:23:13.401649    4522 logs.go:123] Gathering logs for kube-proxy [812f238bbb81] ...
	I0722 04:23:13.401655    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 812f238bbb81"
	I0722 04:23:13.412918    4522 logs.go:123] Gathering logs for storage-provisioner [4b4fab967404] ...
	I0722 04:23:13.412929    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b4fab967404"
	I0722 04:23:13.430985    4522 logs.go:123] Gathering logs for Docker ...
	I0722 04:23:13.430999    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0722 04:23:13.456001    4522 logs.go:123] Gathering logs for container status ...
	I0722 04:23:13.456009    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 04:23:13.467470    4522 logs.go:123] Gathering logs for dmesg ...
	I0722 04:23:13.467480    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 04:23:13.473591    4522 logs.go:123] Gathering logs for kube-apiserver [ff0a72834be9] ...
	I0722 04:23:13.473600    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff0a72834be9"
	I0722 04:23:13.490446    4522 logs.go:123] Gathering logs for etcd [a443754c5936] ...
	I0722 04:23:13.490457    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a443754c5936"
	I0722 04:23:13.503999    4522 logs.go:123] Gathering logs for kube-scheduler [19fea8cb2f86] ...
	I0722 04:23:13.504012    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19fea8cb2f86"
	I0722 04:23:13.520128    4522 logs.go:123] Gathering logs for kube-controller-manager [e86dcf4cf2ad] ...
	I0722 04:23:13.520140    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e86dcf4cf2ad"
	I0722 04:23:13.538276    4522 logs.go:123] Gathering logs for kubelet ...
	I0722 04:23:13.538287    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0722 04:23:13.555629    4522 logs.go:138] Found kubelet problem: Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: W0722 11:17:46.135858    4280 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:23:13.555723    4522 logs.go:138] Found kubelet problem: Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: E0722 11:17:46.135900    4280 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:23:13.571065    4522 logs.go:138] Found kubelet problem: Jul 22 11:21:55 running-upgrade-724000 kubelet[11305]: W0722 11:21:55.461534   11305 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:23:13.571157    4522 logs.go:138] Found kubelet problem: Jul 22 11:21:55 running-upgrade-724000 kubelet[11305]: E0722 11:21:55.461602   11305 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	I0722 04:23:13.572290    4522 logs.go:123] Gathering logs for describe nodes ...
	I0722 04:23:13.572294    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0722 04:23:13.607920    4522 logs.go:123] Gathering logs for coredns [cc88e2e59cc9] ...
	I0722 04:23:13.607931    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc88e2e59cc9"
	I0722 04:23:13.620667    4522 logs.go:123] Gathering logs for coredns [f695590f14ba] ...
	I0722 04:23:13.620677    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f695590f14ba"
	I0722 04:23:13.632165    4522 out.go:304] Setting ErrFile to fd 2...
	I0722 04:23:13.632176    4522 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0722 04:23:13.632211    4522 out.go:239] X Problems detected in kubelet:
	W0722 04:23:13.632217    4522 out.go:239]   Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: W0722 11:17:46.135858    4280 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:23:13.632220    4522 out.go:239]   Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: E0722 11:17:46.135900    4280 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:23:13.632225    4522 out.go:239]   Jul 22 11:21:55 running-upgrade-724000 kubelet[11305]: W0722 11:21:55.461534   11305 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:23:13.632326    4522 out.go:239]   Jul 22 11:21:55 running-upgrade-724000 kubelet[11305]: E0722 11:21:55.461602   11305 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	I0722 04:23:13.632334    4522 out.go:304] Setting ErrFile to fd 2...
	I0722 04:23:13.632338    4522 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:23:23.636395    4522 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:23:28.638968    4522 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:23:28.639208    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0722 04:23:28.665554    4522 logs.go:276] 1 containers: [ff0a72834be9]
	I0722 04:23:28.665641    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0722 04:23:28.677579    4522 logs.go:276] 1 containers: [a443754c5936]
	I0722 04:23:28.677651    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0722 04:23:28.688231    4522 logs.go:276] 2 containers: [cc88e2e59cc9 f695590f14ba]
	I0722 04:23:28.688298    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0722 04:23:28.698465    4522 logs.go:276] 1 containers: [19fea8cb2f86]
	I0722 04:23:28.698535    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0722 04:23:28.710578    4522 logs.go:276] 1 containers: [812f238bbb81]
	I0722 04:23:28.710643    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0722 04:23:28.725256    4522 logs.go:276] 1 containers: [e86dcf4cf2ad]
	I0722 04:23:28.725323    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0722 04:23:28.737468    4522 logs.go:276] 0 containers: []
	W0722 04:23:28.737480    4522 logs.go:278] No container was found matching "kindnet"
	I0722 04:23:28.737539    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0722 04:23:28.748174    4522 logs.go:276] 1 containers: [4b4fab967404]
	I0722 04:23:28.748188    4522 logs.go:123] Gathering logs for dmesg ...
	I0722 04:23:28.748194    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 04:23:28.752883    4522 logs.go:123] Gathering logs for describe nodes ...
	I0722 04:23:28.752894    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0722 04:23:28.788926    4522 logs.go:123] Gathering logs for coredns [f695590f14ba] ...
	I0722 04:23:28.788937    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f695590f14ba"
	I0722 04:23:28.800132    4522 logs.go:123] Gathering logs for kube-scheduler [19fea8cb2f86] ...
	I0722 04:23:28.800143    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19fea8cb2f86"
	I0722 04:23:28.815985    4522 logs.go:123] Gathering logs for container status ...
	I0722 04:23:28.815996    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 04:23:28.827455    4522 logs.go:123] Gathering logs for kubelet ...
	I0722 04:23:28.827466    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0722 04:23:28.844391    4522 logs.go:138] Found kubelet problem: Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: W0722 11:17:46.135858    4280 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:23:28.844485    4522 logs.go:138] Found kubelet problem: Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: E0722 11:17:46.135900    4280 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:23:28.859910    4522 logs.go:138] Found kubelet problem: Jul 22 11:21:55 running-upgrade-724000 kubelet[11305]: W0722 11:21:55.461534   11305 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:23:28.860002    4522 logs.go:138] Found kubelet problem: Jul 22 11:21:55 running-upgrade-724000 kubelet[11305]: E0722 11:21:55.461602   11305 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	I0722 04:23:28.861217    4522 logs.go:123] Gathering logs for kube-apiserver [ff0a72834be9] ...
	I0722 04:23:28.861225    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff0a72834be9"
	I0722 04:23:28.877404    4522 logs.go:123] Gathering logs for etcd [a443754c5936] ...
	I0722 04:23:28.877421    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a443754c5936"
	I0722 04:23:28.895329    4522 logs.go:123] Gathering logs for coredns [cc88e2e59cc9] ...
	I0722 04:23:28.895345    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc88e2e59cc9"
	I0722 04:23:28.908292    4522 logs.go:123] Gathering logs for kube-proxy [812f238bbb81] ...
	I0722 04:23:28.908303    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 812f238bbb81"
	I0722 04:23:28.920290    4522 logs.go:123] Gathering logs for kube-controller-manager [e86dcf4cf2ad] ...
	I0722 04:23:28.920303    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e86dcf4cf2ad"
	I0722 04:23:28.939016    4522 logs.go:123] Gathering logs for storage-provisioner [4b4fab967404] ...
	I0722 04:23:28.939032    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b4fab967404"
	I0722 04:23:28.951038    4522 logs.go:123] Gathering logs for Docker ...
	I0722 04:23:28.951051    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0722 04:23:28.975698    4522 out.go:304] Setting ErrFile to fd 2...
	I0722 04:23:28.975708    4522 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0722 04:23:28.975737    4522 out.go:239] X Problems detected in kubelet:
	W0722 04:23:28.975741    4522 out.go:239]   Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: W0722 11:17:46.135858    4280 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:23:28.975745    4522 out.go:239]   Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: E0722 11:17:46.135900    4280 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:23:28.975750    4522 out.go:239]   Jul 22 11:21:55 running-upgrade-724000 kubelet[11305]: W0722 11:21:55.461534   11305 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:23:28.975769    4522 out.go:239]   Jul 22 11:21:55 running-upgrade-724000 kubelet[11305]: E0722 11:21:55.461602   11305 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	I0722 04:23:28.975782    4522 out.go:304] Setting ErrFile to fd 2...
	I0722 04:23:28.975786    4522 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:23:38.977868    4522 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:23:43.980170    4522 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:23:43.980331    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0722 04:23:43.997367    4522 logs.go:276] 1 containers: [ff0a72834be9]
	I0722 04:23:43.997446    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0722 04:23:44.010065    4522 logs.go:276] 1 containers: [a443754c5936]
	I0722 04:23:44.010139    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0722 04:23:44.022061    4522 logs.go:276] 2 containers: [cc88e2e59cc9 f695590f14ba]
	I0722 04:23:44.022132    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0722 04:23:44.032901    4522 logs.go:276] 1 containers: [19fea8cb2f86]
	I0722 04:23:44.032969    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0722 04:23:44.043549    4522 logs.go:276] 1 containers: [812f238bbb81]
	I0722 04:23:44.043622    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0722 04:23:44.053770    4522 logs.go:276] 1 containers: [e86dcf4cf2ad]
	I0722 04:23:44.053839    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0722 04:23:44.063925    4522 logs.go:276] 0 containers: []
	W0722 04:23:44.063937    4522 logs.go:278] No container was found matching "kindnet"
	I0722 04:23:44.063998    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0722 04:23:44.074779    4522 logs.go:276] 1 containers: [4b4fab967404]
	I0722 04:23:44.074797    4522 logs.go:123] Gathering logs for kube-apiserver [ff0a72834be9] ...
	I0722 04:23:44.074803    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff0a72834be9"
	I0722 04:23:44.091715    4522 logs.go:123] Gathering logs for etcd [a443754c5936] ...
	I0722 04:23:44.091725    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a443754c5936"
	I0722 04:23:44.105670    4522 logs.go:123] Gathering logs for kube-scheduler [19fea8cb2f86] ...
	I0722 04:23:44.105680    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19fea8cb2f86"
	I0722 04:23:44.126908    4522 logs.go:123] Gathering logs for kube-proxy [812f238bbb81] ...
	I0722 04:23:44.126922    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 812f238bbb81"
	I0722 04:23:44.139075    4522 logs.go:123] Gathering logs for kube-controller-manager [e86dcf4cf2ad] ...
	I0722 04:23:44.139085    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e86dcf4cf2ad"
	I0722 04:23:44.156616    4522 logs.go:123] Gathering logs for storage-provisioner [4b4fab967404] ...
	I0722 04:23:44.156625    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b4fab967404"
	I0722 04:23:44.168505    4522 logs.go:123] Gathering logs for kubelet ...
	I0722 04:23:44.168515    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0722 04:23:44.186472    4522 logs.go:138] Found kubelet problem: Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: W0722 11:17:46.135858    4280 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:23:44.186567    4522 logs.go:138] Found kubelet problem: Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: E0722 11:17:46.135900    4280 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:23:44.201809    4522 logs.go:138] Found kubelet problem: Jul 22 11:21:55 running-upgrade-724000 kubelet[11305]: W0722 11:21:55.461534   11305 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:23:44.201901    4522 logs.go:138] Found kubelet problem: Jul 22 11:21:55 running-upgrade-724000 kubelet[11305]: E0722 11:21:55.461602   11305 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	I0722 04:23:44.203074    4522 logs.go:123] Gathering logs for describe nodes ...
	I0722 04:23:44.203079    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0722 04:23:44.237899    4522 logs.go:123] Gathering logs for Docker ...
	I0722 04:23:44.237907    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0722 04:23:44.261532    4522 logs.go:123] Gathering logs for container status ...
	I0722 04:23:44.261540    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 04:23:44.272864    4522 logs.go:123] Gathering logs for coredns [f695590f14ba] ...
	I0722 04:23:44.272876    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f695590f14ba"
	I0722 04:23:44.287191    4522 logs.go:123] Gathering logs for dmesg ...
	I0722 04:23:44.287202    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 04:23:44.291935    4522 logs.go:123] Gathering logs for coredns [cc88e2e59cc9] ...
	I0722 04:23:44.291944    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc88e2e59cc9"
	I0722 04:23:44.303279    4522 out.go:304] Setting ErrFile to fd 2...
	I0722 04:23:44.303288    4522 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0722 04:23:44.303313    4522 out.go:239] X Problems detected in kubelet:
	W0722 04:23:44.303318    4522 out.go:239]   Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: W0722 11:17:46.135858    4280 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:23:44.303322    4522 out.go:239]   Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: E0722 11:17:46.135900    4280 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:23:44.303326    4522 out.go:239]   Jul 22 11:21:55 running-upgrade-724000 kubelet[11305]: W0722 11:21:55.461534   11305 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:23:44.303330    4522 out.go:239]   Jul 22 11:21:55 running-upgrade-724000 kubelet[11305]: E0722 11:21:55.461602   11305 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	I0722 04:23:44.303334    4522 out.go:304] Setting ErrFile to fd 2...
	I0722 04:23:44.303336    4522 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:23:54.307387    4522 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:23:59.309573    4522 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:23:59.309797    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0722 04:23:59.327374    4522 logs.go:276] 1 containers: [ff0a72834be9]
	I0722 04:23:59.327465    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0722 04:23:59.340409    4522 logs.go:276] 1 containers: [a443754c5936]
	I0722 04:23:59.340486    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0722 04:23:59.352105    4522 logs.go:276] 4 containers: [11f612391bb5 3aa1fabe8d3d cc88e2e59cc9 f695590f14ba]
	I0722 04:23:59.352177    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0722 04:23:59.362560    4522 logs.go:276] 1 containers: [19fea8cb2f86]
	I0722 04:23:59.362627    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0722 04:23:59.372972    4522 logs.go:276] 1 containers: [812f238bbb81]
	I0722 04:23:59.373049    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0722 04:23:59.384019    4522 logs.go:276] 1 containers: [e86dcf4cf2ad]
	I0722 04:23:59.384089    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0722 04:23:59.394526    4522 logs.go:276] 0 containers: []
	W0722 04:23:59.394540    4522 logs.go:278] No container was found matching "kindnet"
	I0722 04:23:59.394600    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0722 04:23:59.404912    4522 logs.go:276] 1 containers: [4b4fab967404]
	I0722 04:23:59.404931    4522 logs.go:123] Gathering logs for kube-apiserver [ff0a72834be9] ...
	I0722 04:23:59.404937    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff0a72834be9"
	I0722 04:23:59.418861    4522 logs.go:123] Gathering logs for etcd [a443754c5936] ...
	I0722 04:23:59.418872    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a443754c5936"
	I0722 04:23:59.432899    4522 logs.go:123] Gathering logs for coredns [3aa1fabe8d3d] ...
	I0722 04:23:59.432909    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3aa1fabe8d3d"
	I0722 04:23:59.446502    4522 logs.go:123] Gathering logs for coredns [f695590f14ba] ...
	I0722 04:23:59.446514    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f695590f14ba"
	I0722 04:23:59.457894    4522 logs.go:123] Gathering logs for storage-provisioner [4b4fab967404] ...
	I0722 04:23:59.457904    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b4fab967404"
	I0722 04:23:59.472933    4522 logs.go:123] Gathering logs for container status ...
	I0722 04:23:59.472945    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 04:23:59.484474    4522 logs.go:123] Gathering logs for kubelet ...
	I0722 04:23:59.484487    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0722 04:23:59.500028    4522 logs.go:138] Found kubelet problem: Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: W0722 11:17:46.135858    4280 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:23:59.500120    4522 logs.go:138] Found kubelet problem: Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: E0722 11:17:46.135900    4280 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:23:59.515591    4522 logs.go:138] Found kubelet problem: Jul 22 11:21:55 running-upgrade-724000 kubelet[11305]: W0722 11:21:55.461534   11305 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:23:59.515682    4522 logs.go:138] Found kubelet problem: Jul 22 11:21:55 running-upgrade-724000 kubelet[11305]: E0722 11:21:55.461602   11305 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	I0722 04:23:59.516891    4522 logs.go:123] Gathering logs for describe nodes ...
	I0722 04:23:59.516896    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0722 04:23:59.551993    4522 logs.go:123] Gathering logs for coredns [11f612391bb5] ...
	I0722 04:23:59.552002    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11f612391bb5"
	I0722 04:23:59.563462    4522 logs.go:123] Gathering logs for kube-proxy [812f238bbb81] ...
	I0722 04:23:59.563472    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 812f238bbb81"
	I0722 04:23:59.575480    4522 logs.go:123] Gathering logs for kube-controller-manager [e86dcf4cf2ad] ...
	I0722 04:23:59.575489    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e86dcf4cf2ad"
	I0722 04:23:59.592757    4522 logs.go:123] Gathering logs for Docker ...
	I0722 04:23:59.592766    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0722 04:23:59.617898    4522 logs.go:123] Gathering logs for dmesg ...
	I0722 04:23:59.617906    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 04:23:59.622672    4522 logs.go:123] Gathering logs for kube-scheduler [19fea8cb2f86] ...
	I0722 04:23:59.622681    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19fea8cb2f86"
	I0722 04:23:59.638252    4522 logs.go:123] Gathering logs for coredns [cc88e2e59cc9] ...
	I0722 04:23:59.638263    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc88e2e59cc9"
	I0722 04:23:59.650151    4522 out.go:304] Setting ErrFile to fd 2...
	I0722 04:23:59.650162    4522 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0722 04:23:59.650189    4522 out.go:239] X Problems detected in kubelet:
	W0722 04:23:59.650194    4522 out.go:239]   Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: W0722 11:17:46.135858    4280 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:23:59.650213    4522 out.go:239]   Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: E0722 11:17:46.135900    4280 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:23:59.650223    4522 out.go:239]   Jul 22 11:21:55 running-upgrade-724000 kubelet[11305]: W0722 11:21:55.461534   11305 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:23:59.650229    4522 out.go:239]   Jul 22 11:21:55 running-upgrade-724000 kubelet[11305]: E0722 11:21:55.461602   11305 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	I0722 04:23:59.650232    4522 out.go:304] Setting ErrFile to fd 2...
	I0722 04:23:59.650235    4522 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:24:09.652949    4522 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:24:14.655106    4522 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
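	(The two lines above show the pattern that repeats through the rest of this log: minikube probes https://10.0.2.15:8443/healthz (api_server.go:253), gives up when the client timeout expires (api_server.go:269), and then falls back to gathering component logs. A minimal Go sketch of that kind of probe, assuming only the endpoint and the roughly five-second timeout visible in the log — the actual minikube implementation differs in detail — could look like:)

	    package main

	    import (
	        "crypto/tls"
	        "fmt"
	        "net/http"
	        "time"
	    )

	    func main() {
	        client := &http.Client{
	            Timeout: 5 * time.Second,
	            // The guest apiserver certificate is not trusted here, so skip verification for this probe only.
	            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	        }
	        resp, err := client.Get("https://10.0.2.15:8443/healthz")
	        if err != nil {
	            fmt.Println("stopped:", err) // e.g. context deadline exceeded, as in the log above
	            return
	        }
	        defer resp.Body.Close()
	        fmt.Println("healthz returned:", resp.Status)
	    }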
	I0722 04:24:14.655321    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0722 04:24:14.676654    4522 logs.go:276] 1 containers: [ff0a72834be9]
	I0722 04:24:14.676750    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0722 04:24:14.690153    4522 logs.go:276] 1 containers: [a443754c5936]
	I0722 04:24:14.690230    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0722 04:24:14.702100    4522 logs.go:276] 4 containers: [11f612391bb5 3aa1fabe8d3d cc88e2e59cc9 f695590f14ba]
	I0722 04:24:14.702166    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0722 04:24:14.712704    4522 logs.go:276] 1 containers: [19fea8cb2f86]
	I0722 04:24:14.712766    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0722 04:24:14.723076    4522 logs.go:276] 1 containers: [812f238bbb81]
	I0722 04:24:14.723149    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0722 04:24:14.734184    4522 logs.go:276] 1 containers: [e86dcf4cf2ad]
	I0722 04:24:14.734260    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0722 04:24:14.744061    4522 logs.go:276] 0 containers: []
	W0722 04:24:14.744071    4522 logs.go:278] No container was found matching "kindnet"
	I0722 04:24:14.744122    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0722 04:24:14.754928    4522 logs.go:276] 1 containers: [4b4fab967404]
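	(The docker ps / docker logs pairs above are how each component's logs are collected: a name filter such as k8s_kube-apiserver yields the container IDs, and each ID is then tailed with docker logs --tail 400. A rough Go sketch of those two steps, assuming a docker CLI on PATH — minikube runs the same commands on the guest over SSH instead — might be:)

	    package main

	    import (
	        "fmt"
	        "os/exec"
	        "strings"
	    )

	    // containersFor lists container IDs whose name matches the given filter,
	    // mirroring: docker ps -a --filter=name=<name> --format={{.ID}}
	    func containersFor(name string) ([]string, error) {
	        out, err := exec.Command("docker", "ps", "-a",
	            "--filter", "name="+name, "--format", "{{.ID}}").Output()
	        if err != nil {
	            return nil, err
	        }
	        return strings.Fields(string(out)), nil
	    }

	    func main() {
	        ids, err := containersFor("k8s_kube-apiserver")
	        if err != nil {
	            fmt.Println("docker ps failed:", err)
	            return
	        }
	        fmt.Printf("%d containers: %v\n", len(ids), ids)
	        for _, id := range ids {
	            // Tail the last 400 lines of each container, as the gathering steps above do.
	            logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
	            fmt.Println(string(logs))
	        }
	    }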
	I0722 04:24:14.754946    4522 logs.go:123] Gathering logs for etcd [a443754c5936] ...
	I0722 04:24:14.754952    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a443754c5936"
	I0722 04:24:14.769122    4522 logs.go:123] Gathering logs for kube-controller-manager [e86dcf4cf2ad] ...
	I0722 04:24:14.769136    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e86dcf4cf2ad"
	I0722 04:24:14.786965    4522 logs.go:123] Gathering logs for kube-apiserver [ff0a72834be9] ...
	I0722 04:24:14.786977    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff0a72834be9"
	I0722 04:24:14.801457    4522 logs.go:123] Gathering logs for coredns [11f612391bb5] ...
	I0722 04:24:14.801468    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11f612391bb5"
	I0722 04:24:14.814581    4522 logs.go:123] Gathering logs for coredns [f695590f14ba] ...
	I0722 04:24:14.814593    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f695590f14ba"
	I0722 04:24:14.826014    4522 logs.go:123] Gathering logs for kube-proxy [812f238bbb81] ...
	I0722 04:24:14.826026    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 812f238bbb81"
	I0722 04:24:14.837707    4522 logs.go:123] Gathering logs for container status ...
	I0722 04:24:14.837720    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 04:24:14.849660    4522 logs.go:123] Gathering logs for dmesg ...
	I0722 04:24:14.849673    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 04:24:14.854472    4522 logs.go:123] Gathering logs for coredns [cc88e2e59cc9] ...
	I0722 04:24:14.854481    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc88e2e59cc9"
	I0722 04:24:14.866281    4522 logs.go:123] Gathering logs for kube-scheduler [19fea8cb2f86] ...
	I0722 04:24:14.866293    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19fea8cb2f86"
	I0722 04:24:14.885728    4522 logs.go:123] Gathering logs for Docker ...
	I0722 04:24:14.885738    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0722 04:24:14.910900    4522 logs.go:123] Gathering logs for kubelet ...
	I0722 04:24:14.910908    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0722 04:24:14.928691    4522 logs.go:138] Found kubelet problem: Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: W0722 11:17:46.135858    4280 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:24:14.928783    4522 logs.go:138] Found kubelet problem: Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: E0722 11:17:46.135900    4280 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:24:14.945010    4522 logs.go:138] Found kubelet problem: Jul 22 11:21:55 running-upgrade-724000 kubelet[11305]: W0722 11:21:55.461534   11305 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:24:14.945103    4522 logs.go:138] Found kubelet problem: Jul 22 11:21:55 running-upgrade-724000 kubelet[11305]: E0722 11:21:55.461602   11305 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
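	(The "Found kubelet problem" warnings above come from scanning the journalctl output for the kubelet unit (logs.go:138). A simplified Go sketch of that scan, assuming sudo journalctl is available and using illustrative substrings rather than minikube's real pattern list, could be:)

	    package main

	    import (
	        "bufio"
	        "bytes"
	        "fmt"
	        "os/exec"
	        "strings"
	    )

	    func main() {
	        // Same unit and line count as the gathering step above.
	        out, err := exec.Command("sudo", "journalctl", "-u", "kubelet", "-n", "400").Output()
	        if err != nil {
	            fmt.Println("journalctl failed:", err)
	            return
	        }
	        sc := bufio.NewScanner(bytes.NewReader(out))
	        for sc.Scan() {
	            line := sc.Text()
	            // Illustrative markers only; the real detector has its own pattern list.
	            if strings.Contains(line, "failed to list") || strings.Contains(line, "Failed to watch") {
	                fmt.Println("Found kubelet problem:", line)
	            }
	        }
	    }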
	I0722 04:24:14.946321    4522 logs.go:123] Gathering logs for describe nodes ...
	I0722 04:24:14.946330    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0722 04:24:14.979521    4522 logs.go:123] Gathering logs for coredns [3aa1fabe8d3d] ...
	I0722 04:24:14.979534    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3aa1fabe8d3d"
	I0722 04:24:14.991521    4522 logs.go:123] Gathering logs for storage-provisioner [4b4fab967404] ...
	I0722 04:24:14.991533    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b4fab967404"
	I0722 04:24:15.002909    4522 out.go:304] Setting ErrFile to fd 2...
	I0722 04:24:15.002919    4522 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0722 04:24:15.002947    4522 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0722 04:24:15.002952    4522 out.go:239]   Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: W0722 11:17:46.135858    4280 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	  Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: W0722 11:17:46.135858    4280 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:24:15.002960    4522 out.go:239]   Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: E0722 11:17:46.135900    4280 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	  Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: E0722 11:17:46.135900    4280 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:24:15.002965    4522 out.go:239]   Jul 22 11:21:55 running-upgrade-724000 kubelet[11305]: W0722 11:21:55.461534   11305 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	  Jul 22 11:21:55 running-upgrade-724000 kubelet[11305]: W0722 11:21:55.461534   11305 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:24:15.002969    4522 out.go:239]   Jul 22 11:21:55 running-upgrade-724000 kubelet[11305]: E0722 11:21:55.461602   11305 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	  Jul 22 11:21:55 running-upgrade-724000 kubelet[11305]: E0722 11:21:55.461602   11305 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	I0722 04:24:15.002971    4522 out.go:304] Setting ErrFile to fd 2...
	I0722 04:24:15.003012    4522 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:24:25.007007    4522 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:24:30.008525    4522 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:24:30.008730    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0722 04:24:30.024954    4522 logs.go:276] 1 containers: [ff0a72834be9]
	I0722 04:24:30.025033    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0722 04:24:30.038343    4522 logs.go:276] 1 containers: [a443754c5936]
	I0722 04:24:30.038413    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0722 04:24:30.051643    4522 logs.go:276] 4 containers: [11f612391bb5 3aa1fabe8d3d cc88e2e59cc9 f695590f14ba]
	I0722 04:24:30.051712    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0722 04:24:30.071466    4522 logs.go:276] 1 containers: [19fea8cb2f86]
	I0722 04:24:30.071526    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0722 04:24:30.081889    4522 logs.go:276] 1 containers: [812f238bbb81]
	I0722 04:24:30.081953    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0722 04:24:30.092565    4522 logs.go:276] 1 containers: [e86dcf4cf2ad]
	I0722 04:24:30.092627    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0722 04:24:30.102699    4522 logs.go:276] 0 containers: []
	W0722 04:24:30.102713    4522 logs.go:278] No container was found matching "kindnet"
	I0722 04:24:30.102770    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0722 04:24:30.112950    4522 logs.go:276] 1 containers: [4b4fab967404]
	I0722 04:24:30.112969    4522 logs.go:123] Gathering logs for Docker ...
	I0722 04:24:30.112973    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0722 04:24:30.138377    4522 logs.go:123] Gathering logs for storage-provisioner [4b4fab967404] ...
	I0722 04:24:30.138387    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b4fab967404"
	I0722 04:24:30.150214    4522 logs.go:123] Gathering logs for describe nodes ...
	I0722 04:24:30.150226    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0722 04:24:30.184271    4522 logs.go:123] Gathering logs for coredns [11f612391bb5] ...
	I0722 04:24:30.184282    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11f612391bb5"
	I0722 04:24:30.195539    4522 logs.go:123] Gathering logs for coredns [cc88e2e59cc9] ...
	I0722 04:24:30.195553    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc88e2e59cc9"
	I0722 04:24:30.207541    4522 logs.go:123] Gathering logs for container status ...
	I0722 04:24:30.207555    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 04:24:30.218929    4522 logs.go:123] Gathering logs for dmesg ...
	I0722 04:24:30.218943    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 04:24:30.223300    4522 logs.go:123] Gathering logs for kube-apiserver [ff0a72834be9] ...
	I0722 04:24:30.223310    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff0a72834be9"
	I0722 04:24:30.241955    4522 logs.go:123] Gathering logs for kube-controller-manager [e86dcf4cf2ad] ...
	I0722 04:24:30.241968    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e86dcf4cf2ad"
	I0722 04:24:30.259184    4522 logs.go:123] Gathering logs for coredns [f695590f14ba] ...
	I0722 04:24:30.259196    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f695590f14ba"
	I0722 04:24:30.271574    4522 logs.go:123] Gathering logs for kube-scheduler [19fea8cb2f86] ...
	I0722 04:24:30.271585    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19fea8cb2f86"
	I0722 04:24:30.292410    4522 logs.go:123] Gathering logs for kube-proxy [812f238bbb81] ...
	I0722 04:24:30.292420    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 812f238bbb81"
	I0722 04:24:30.304258    4522 logs.go:123] Gathering logs for kubelet ...
	I0722 04:24:30.304268    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0722 04:24:30.322846    4522 logs.go:138] Found kubelet problem: Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: W0722 11:17:46.135858    4280 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:24:30.322950    4522 logs.go:138] Found kubelet problem: Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: E0722 11:17:46.135900    4280 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:24:30.338953    4522 logs.go:138] Found kubelet problem: Jul 22 11:21:55 running-upgrade-724000 kubelet[11305]: W0722 11:21:55.461534   11305 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:24:30.339051    4522 logs.go:138] Found kubelet problem: Jul 22 11:21:55 running-upgrade-724000 kubelet[11305]: E0722 11:21:55.461602   11305 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	I0722 04:24:30.340271    4522 logs.go:123] Gathering logs for etcd [a443754c5936] ...
	I0722 04:24:30.340281    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a443754c5936"
	I0722 04:24:30.364179    4522 logs.go:123] Gathering logs for coredns [3aa1fabe8d3d] ...
	I0722 04:24:30.364191    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3aa1fabe8d3d"
	I0722 04:24:30.385664    4522 out.go:304] Setting ErrFile to fd 2...
	I0722 04:24:30.385675    4522 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0722 04:24:30.385703    4522 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0722 04:24:30.385710    4522 out.go:239]   Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: W0722 11:17:46.135858    4280 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	  Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: W0722 11:17:46.135858    4280 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:24:30.385758    4522 out.go:239]   Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: E0722 11:17:46.135900    4280 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	  Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: E0722 11:17:46.135900    4280 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:24:30.385770    4522 out.go:239]   Jul 22 11:21:55 running-upgrade-724000 kubelet[11305]: W0722 11:21:55.461534   11305 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	  Jul 22 11:21:55 running-upgrade-724000 kubelet[11305]: W0722 11:21:55.461534   11305 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:24:30.385779    4522 out.go:239]   Jul 22 11:21:55 running-upgrade-724000 kubelet[11305]: E0722 11:21:55.461602   11305 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	  Jul 22 11:21:55 running-upgrade-724000 kubelet[11305]: E0722 11:21:55.461602   11305 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	I0722 04:24:30.385800    4522 out.go:304] Setting ErrFile to fd 2...
	I0722 04:24:30.385813    4522 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:24:40.389797    4522 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:24:45.392121    4522 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:24:45.392406    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0722 04:24:45.426403    4522 logs.go:276] 1 containers: [ff0a72834be9]
	I0722 04:24:45.426529    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0722 04:24:45.445700    4522 logs.go:276] 1 containers: [a443754c5936]
	I0722 04:24:45.445794    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0722 04:24:45.476238    4522 logs.go:276] 4 containers: [11f612391bb5 3aa1fabe8d3d cc88e2e59cc9 f695590f14ba]
	I0722 04:24:45.476317    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0722 04:24:45.488059    4522 logs.go:276] 1 containers: [19fea8cb2f86]
	I0722 04:24:45.488131    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0722 04:24:45.498806    4522 logs.go:276] 1 containers: [812f238bbb81]
	I0722 04:24:45.498879    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0722 04:24:45.508847    4522 logs.go:276] 1 containers: [e86dcf4cf2ad]
	I0722 04:24:45.508906    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0722 04:24:45.519477    4522 logs.go:276] 0 containers: []
	W0722 04:24:45.519489    4522 logs.go:278] No container was found matching "kindnet"
	I0722 04:24:45.519550    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0722 04:24:45.529637    4522 logs.go:276] 1 containers: [4b4fab967404]
	I0722 04:24:45.529655    4522 logs.go:123] Gathering logs for kubelet ...
	I0722 04:24:45.529660    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0722 04:24:45.545924    4522 logs.go:138] Found kubelet problem: Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: W0722 11:17:46.135858    4280 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:24:45.546021    4522 logs.go:138] Found kubelet problem: Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: E0722 11:17:46.135900    4280 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:24:45.561362    4522 logs.go:138] Found kubelet problem: Jul 22 11:21:55 running-upgrade-724000 kubelet[11305]: W0722 11:21:55.461534   11305 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:24:45.561455    4522 logs.go:138] Found kubelet problem: Jul 22 11:21:55 running-upgrade-724000 kubelet[11305]: E0722 11:21:55.461602   11305 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	I0722 04:24:45.562654    4522 logs.go:123] Gathering logs for storage-provisioner [4b4fab967404] ...
	I0722 04:24:45.562663    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b4fab967404"
	I0722 04:24:45.574557    4522 logs.go:123] Gathering logs for dmesg ...
	I0722 04:24:45.574567    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 04:24:45.579147    4522 logs.go:123] Gathering logs for coredns [cc88e2e59cc9] ...
	I0722 04:24:45.579156    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc88e2e59cc9"
	I0722 04:24:45.591244    4522 logs.go:123] Gathering logs for coredns [f695590f14ba] ...
	I0722 04:24:45.591258    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f695590f14ba"
	I0722 04:24:45.602740    4522 logs.go:123] Gathering logs for kube-scheduler [19fea8cb2f86] ...
	I0722 04:24:45.602754    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19fea8cb2f86"
	I0722 04:24:45.618171    4522 logs.go:123] Gathering logs for Docker ...
	I0722 04:24:45.618182    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0722 04:24:45.643426    4522 logs.go:123] Gathering logs for container status ...
	I0722 04:24:45.643436    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 04:24:45.656499    4522 logs.go:123] Gathering logs for describe nodes ...
	I0722 04:24:45.656509    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0722 04:24:45.693812    4522 logs.go:123] Gathering logs for kube-apiserver [ff0a72834be9] ...
	I0722 04:24:45.693828    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff0a72834be9"
	I0722 04:24:45.709755    4522 logs.go:123] Gathering logs for etcd [a443754c5936] ...
	I0722 04:24:45.709766    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a443754c5936"
	I0722 04:24:45.724733    4522 logs.go:123] Gathering logs for coredns [11f612391bb5] ...
	I0722 04:24:45.724745    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11f612391bb5"
	I0722 04:24:45.736065    4522 logs.go:123] Gathering logs for coredns [3aa1fabe8d3d] ...
	I0722 04:24:45.736078    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3aa1fabe8d3d"
	I0722 04:24:45.749102    4522 logs.go:123] Gathering logs for kube-proxy [812f238bbb81] ...
	I0722 04:24:45.749116    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 812f238bbb81"
	I0722 04:24:45.761292    4522 logs.go:123] Gathering logs for kube-controller-manager [e86dcf4cf2ad] ...
	I0722 04:24:45.761306    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e86dcf4cf2ad"
	I0722 04:24:45.778648    4522 out.go:304] Setting ErrFile to fd 2...
	I0722 04:24:45.778658    4522 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0722 04:24:45.778685    4522 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0722 04:24:45.778689    4522 out.go:239]   Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: W0722 11:17:46.135858    4280 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	  Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: W0722 11:17:46.135858    4280 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:24:45.778693    4522 out.go:239]   Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: E0722 11:17:46.135900    4280 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	  Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: E0722 11:17:46.135900    4280 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:24:45.778697    4522 out.go:239]   Jul 22 11:21:55 running-upgrade-724000 kubelet[11305]: W0722 11:21:55.461534   11305 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	  Jul 22 11:21:55 running-upgrade-724000 kubelet[11305]: W0722 11:21:55.461534   11305 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:24:45.778701    4522 out.go:239]   Jul 22 11:21:55 running-upgrade-724000 kubelet[11305]: E0722 11:21:55.461602   11305 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	  Jul 22 11:21:55 running-upgrade-724000 kubelet[11305]: E0722 11:21:55.461602   11305 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	I0722 04:24:45.778704    4522 out.go:304] Setting ErrFile to fd 2...
	I0722 04:24:45.778707    4522 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:24:55.782657    4522 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:25:00.784795    4522 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:25:00.784894    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0722 04:25:00.800206    4522 logs.go:276] 1 containers: [ff0a72834be9]
	I0722 04:25:00.800275    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0722 04:25:00.811748    4522 logs.go:276] 1 containers: [a443754c5936]
	I0722 04:25:00.811814    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0722 04:25:00.823299    4522 logs.go:276] 4 containers: [11f612391bb5 3aa1fabe8d3d cc88e2e59cc9 f695590f14ba]
	I0722 04:25:00.823377    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0722 04:25:00.834640    4522 logs.go:276] 1 containers: [19fea8cb2f86]
	I0722 04:25:00.834712    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0722 04:25:00.846156    4522 logs.go:276] 1 containers: [812f238bbb81]
	I0722 04:25:00.846229    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0722 04:25:00.857562    4522 logs.go:276] 1 containers: [e86dcf4cf2ad]
	I0722 04:25:00.857636    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0722 04:25:00.868591    4522 logs.go:276] 0 containers: []
	W0722 04:25:00.868606    4522 logs.go:278] No container was found matching "kindnet"
	I0722 04:25:00.868663    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0722 04:25:00.883966    4522 logs.go:276] 1 containers: [4b4fab967404]
	I0722 04:25:00.883988    4522 logs.go:123] Gathering logs for etcd [a443754c5936] ...
	I0722 04:25:00.883993    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a443754c5936"
	I0722 04:25:00.898624    4522 logs.go:123] Gathering logs for coredns [f695590f14ba] ...
	I0722 04:25:00.898638    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f695590f14ba"
	I0722 04:25:00.911229    4522 logs.go:123] Gathering logs for Docker ...
	I0722 04:25:00.911241    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0722 04:25:00.936041    4522 logs.go:123] Gathering logs for container status ...
	I0722 04:25:00.936059    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 04:25:00.948277    4522 logs.go:123] Gathering logs for dmesg ...
	I0722 04:25:00.948290    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 04:25:00.953201    4522 logs.go:123] Gathering logs for coredns [11f612391bb5] ...
	I0722 04:25:00.953207    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11f612391bb5"
	I0722 04:25:00.964910    4522 logs.go:123] Gathering logs for coredns [3aa1fabe8d3d] ...
	I0722 04:25:00.964924    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3aa1fabe8d3d"
	I0722 04:25:00.981692    4522 logs.go:123] Gathering logs for kube-scheduler [19fea8cb2f86] ...
	I0722 04:25:00.981707    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19fea8cb2f86"
	I0722 04:25:00.998538    4522 logs.go:123] Gathering logs for kube-proxy [812f238bbb81] ...
	I0722 04:25:00.998550    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 812f238bbb81"
	I0722 04:25:01.011717    4522 logs.go:123] Gathering logs for kube-controller-manager [e86dcf4cf2ad] ...
	I0722 04:25:01.011732    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e86dcf4cf2ad"
	I0722 04:25:01.030587    4522 logs.go:123] Gathering logs for storage-provisioner [4b4fab967404] ...
	I0722 04:25:01.030605    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b4fab967404"
	I0722 04:25:01.042907    4522 logs.go:123] Gathering logs for kubelet ...
	I0722 04:25:01.042923    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0722 04:25:01.062266    4522 logs.go:138] Found kubelet problem: Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: W0722 11:17:46.135858    4280 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:25:01.062370    4522 logs.go:138] Found kubelet problem: Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: E0722 11:17:46.135900    4280 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:25:01.078419    4522 logs.go:138] Found kubelet problem: Jul 22 11:21:55 running-upgrade-724000 kubelet[11305]: W0722 11:21:55.461534   11305 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:25:01.078513    4522 logs.go:138] Found kubelet problem: Jul 22 11:21:55 running-upgrade-724000 kubelet[11305]: E0722 11:21:55.461602   11305 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	I0722 04:25:01.079729    4522 logs.go:123] Gathering logs for describe nodes ...
	I0722 04:25:01.079744    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0722 04:25:01.118289    4522 logs.go:123] Gathering logs for coredns [cc88e2e59cc9] ...
	I0722 04:25:01.118305    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc88e2e59cc9"
	I0722 04:25:01.132621    4522 logs.go:123] Gathering logs for kube-apiserver [ff0a72834be9] ...
	I0722 04:25:01.132635    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff0a72834be9"
	I0722 04:25:01.148433    4522 out.go:304] Setting ErrFile to fd 2...
	I0722 04:25:01.148448    4522 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0722 04:25:01.148476    4522 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0722 04:25:01.148481    4522 out.go:239]   Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: W0722 11:17:46.135858    4280 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	  Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: W0722 11:17:46.135858    4280 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:25:01.148487    4522 out.go:239]   Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: E0722 11:17:46.135900    4280 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	  Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: E0722 11:17:46.135900    4280 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:25:01.148491    4522 out.go:239]   Jul 22 11:21:55 running-upgrade-724000 kubelet[11305]: W0722 11:21:55.461534   11305 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	  Jul 22 11:21:55 running-upgrade-724000 kubelet[11305]: W0722 11:21:55.461534   11305 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:25:01.148494    4522 out.go:239]   Jul 22 11:21:55 running-upgrade-724000 kubelet[11305]: E0722 11:21:55.461602   11305 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	  Jul 22 11:21:55 running-upgrade-724000 kubelet[11305]: E0722 11:21:55.461602   11305 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	I0722 04:25:01.148497    4522 out.go:304] Setting ErrFile to fd 2...
	I0722 04:25:01.148499    4522 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:25:11.151995    4522 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:25:16.154282    4522 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:25:16.154431    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0722 04:25:16.165082    4522 logs.go:276] 1 containers: [ff0a72834be9]
	I0722 04:25:16.165148    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0722 04:25:16.176422    4522 logs.go:276] 1 containers: [a443754c5936]
	I0722 04:25:16.176498    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0722 04:25:16.193740    4522 logs.go:276] 4 containers: [11f612391bb5 3aa1fabe8d3d cc88e2e59cc9 f695590f14ba]
	I0722 04:25:16.193806    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0722 04:25:16.204594    4522 logs.go:276] 1 containers: [19fea8cb2f86]
	I0722 04:25:16.204663    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0722 04:25:16.215102    4522 logs.go:276] 1 containers: [812f238bbb81]
	I0722 04:25:16.215174    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0722 04:25:16.225640    4522 logs.go:276] 1 containers: [e86dcf4cf2ad]
	I0722 04:25:16.225707    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0722 04:25:16.236215    4522 logs.go:276] 0 containers: []
	W0722 04:25:16.236228    4522 logs.go:278] No container was found matching "kindnet"
	I0722 04:25:16.236284    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0722 04:25:16.247077    4522 logs.go:276] 1 containers: [4b4fab967404]
	I0722 04:25:16.247098    4522 logs.go:123] Gathering logs for kubelet ...
	I0722 04:25:16.247103    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0722 04:25:16.265609    4522 logs.go:138] Found kubelet problem: Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: W0722 11:17:46.135858    4280 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:25:16.265703    4522 logs.go:138] Found kubelet problem: Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: E0722 11:17:46.135900    4280 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:25:16.281670    4522 logs.go:138] Found kubelet problem: Jul 22 11:21:55 running-upgrade-724000 kubelet[11305]: W0722 11:21:55.461534   11305 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:25:16.281762    4522 logs.go:138] Found kubelet problem: Jul 22 11:21:55 running-upgrade-724000 kubelet[11305]: E0722 11:21:55.461602   11305 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	I0722 04:25:16.282973    4522 logs.go:123] Gathering logs for coredns [3aa1fabe8d3d] ...
	I0722 04:25:16.282978    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3aa1fabe8d3d"
	I0722 04:25:16.300379    4522 logs.go:123] Gathering logs for coredns [cc88e2e59cc9] ...
	I0722 04:25:16.300389    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc88e2e59cc9"
	I0722 04:25:16.312252    4522 logs.go:123] Gathering logs for kube-scheduler [19fea8cb2f86] ...
	I0722 04:25:16.312263    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19fea8cb2f86"
	I0722 04:25:16.328008    4522 logs.go:123] Gathering logs for Docker ...
	I0722 04:25:16.328024    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0722 04:25:16.351678    4522 logs.go:123] Gathering logs for container status ...
	I0722 04:25:16.351688    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 04:25:16.363392    4522 logs.go:123] Gathering logs for dmesg ...
	I0722 04:25:16.363403    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 04:25:16.368253    4522 logs.go:123] Gathering logs for etcd [a443754c5936] ...
	I0722 04:25:16.368262    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a443754c5936"
	I0722 04:25:16.382918    4522 logs.go:123] Gathering logs for coredns [f695590f14ba] ...
	I0722 04:25:16.382930    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f695590f14ba"
	I0722 04:25:16.395151    4522 logs.go:123] Gathering logs for kube-proxy [812f238bbb81] ...
	I0722 04:25:16.395163    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 812f238bbb81"
	I0722 04:25:16.407057    4522 logs.go:123] Gathering logs for describe nodes ...
	I0722 04:25:16.407068    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0722 04:25:16.442740    4522 logs.go:123] Gathering logs for kube-apiserver [ff0a72834be9] ...
	I0722 04:25:16.442754    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff0a72834be9"
	I0722 04:25:16.457439    4522 logs.go:123] Gathering logs for storage-provisioner [4b4fab967404] ...
	I0722 04:25:16.457450    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b4fab967404"
	I0722 04:25:16.474746    4522 logs.go:123] Gathering logs for coredns [11f612391bb5] ...
	I0722 04:25:16.474757    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11f612391bb5"
	I0722 04:25:16.486637    4522 logs.go:123] Gathering logs for kube-controller-manager [e86dcf4cf2ad] ...
	I0722 04:25:16.486648    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e86dcf4cf2ad"
	I0722 04:25:16.504282    4522 out.go:304] Setting ErrFile to fd 2...
	I0722 04:25:16.504292    4522 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0722 04:25:16.504319    4522 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0722 04:25:16.504323    4522 out.go:239]   Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: W0722 11:17:46.135858    4280 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	  Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: W0722 11:17:46.135858    4280 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:25:16.504327    4522 out.go:239]   Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: E0722 11:17:46.135900    4280 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	  Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: E0722 11:17:46.135900    4280 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:25:16.504332    4522 out.go:239]   Jul 22 11:21:55 running-upgrade-724000 kubelet[11305]: W0722 11:21:55.461534   11305 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	  Jul 22 11:21:55 running-upgrade-724000 kubelet[11305]: W0722 11:21:55.461534   11305 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:25:16.504335    4522 out.go:239]   Jul 22 11:21:55 running-upgrade-724000 kubelet[11305]: E0722 11:21:55.461602   11305 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	  Jul 22 11:21:55 running-upgrade-724000 kubelet[11305]: E0722 11:21:55.461602   11305 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	I0722 04:25:16.504369    4522 out.go:304] Setting ErrFile to fd 2...
	I0722 04:25:16.504389    4522 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:25:26.508375    4522 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:25:31.510636    4522 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:25:31.510844    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0722 04:25:31.530741    4522 logs.go:276] 1 containers: [ff0a72834be9]
	I0722 04:25:31.530833    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0722 04:25:31.545082    4522 logs.go:276] 1 containers: [a443754c5936]
	I0722 04:25:31.545157    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0722 04:25:31.557613    4522 logs.go:276] 4 containers: [11f612391bb5 3aa1fabe8d3d cc88e2e59cc9 f695590f14ba]
	I0722 04:25:31.557684    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0722 04:25:31.567748    4522 logs.go:276] 1 containers: [19fea8cb2f86]
	I0722 04:25:31.567809    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0722 04:25:31.578737    4522 logs.go:276] 1 containers: [812f238bbb81]
	I0722 04:25:31.578794    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0722 04:25:31.589296    4522 logs.go:276] 1 containers: [e86dcf4cf2ad]
	I0722 04:25:31.589361    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0722 04:25:31.599453    4522 logs.go:276] 0 containers: []
	W0722 04:25:31.599465    4522 logs.go:278] No container was found matching "kindnet"
	I0722 04:25:31.599516    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0722 04:25:31.614106    4522 logs.go:276] 1 containers: [4b4fab967404]
	I0722 04:25:31.614124    4522 logs.go:123] Gathering logs for dmesg ...
	I0722 04:25:31.614130    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 04:25:31.618637    4522 logs.go:123] Gathering logs for etcd [a443754c5936] ...
	I0722 04:25:31.618645    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a443754c5936"
	I0722 04:25:31.632366    4522 logs.go:123] Gathering logs for storage-provisioner [4b4fab967404] ...
	I0722 04:25:31.632375    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b4fab967404"
	I0722 04:25:31.644714    4522 logs.go:123] Gathering logs for coredns [11f612391bb5] ...
	I0722 04:25:31.644725    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11f612391bb5"
	I0722 04:25:31.656940    4522 logs.go:123] Gathering logs for container status ...
	I0722 04:25:31.656955    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 04:25:31.668468    4522 logs.go:123] Gathering logs for kubelet ...
	I0722 04:25:31.668479    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0722 04:25:31.686264    4522 logs.go:138] Found kubelet problem: Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: W0722 11:17:46.135858    4280 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:25:31.686361    4522 logs.go:138] Found kubelet problem: Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: E0722 11:17:46.135900    4280 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:25:31.702588    4522 logs.go:138] Found kubelet problem: Jul 22 11:21:55 running-upgrade-724000 kubelet[11305]: W0722 11:21:55.461534   11305 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:25:31.702691    4522 logs.go:138] Found kubelet problem: Jul 22 11:21:55 running-upgrade-724000 kubelet[11305]: E0722 11:21:55.461602   11305 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	I0722 04:25:31.703908    4522 logs.go:123] Gathering logs for coredns [3aa1fabe8d3d] ...
	I0722 04:25:31.703918    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3aa1fabe8d3d"
	I0722 04:25:31.729237    4522 logs.go:123] Gathering logs for kube-proxy [812f238bbb81] ...
	I0722 04:25:31.729252    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 812f238bbb81"
	I0722 04:25:31.742972    4522 logs.go:123] Gathering logs for kube-controller-manager [e86dcf4cf2ad] ...
	I0722 04:25:31.742982    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e86dcf4cf2ad"
	I0722 04:25:31.760139    4522 logs.go:123] Gathering logs for Docker ...
	I0722 04:25:31.760149    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0722 04:25:31.783510    4522 logs.go:123] Gathering logs for describe nodes ...
	I0722 04:25:31.783518    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0722 04:25:31.823762    4522 logs.go:123] Gathering logs for kube-apiserver [ff0a72834be9] ...
	I0722 04:25:31.823772    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff0a72834be9"
	I0722 04:25:31.838675    4522 logs.go:123] Gathering logs for coredns [cc88e2e59cc9] ...
	I0722 04:25:31.838685    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc88e2e59cc9"
	I0722 04:25:31.850752    4522 logs.go:123] Gathering logs for coredns [f695590f14ba] ...
	I0722 04:25:31.850762    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f695590f14ba"
	I0722 04:25:31.862346    4522 logs.go:123] Gathering logs for kube-scheduler [19fea8cb2f86] ...
	I0722 04:25:31.862360    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19fea8cb2f86"
	I0722 04:25:31.877817    4522 out.go:304] Setting ErrFile to fd 2...
	I0722 04:25:31.877828    4522 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0722 04:25:31.877855    4522 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0722 04:25:31.877859    4522 out.go:239]   Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: W0722 11:17:46.135858    4280 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	  Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: W0722 11:17:46.135858    4280 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:25:31.877862    4522 out.go:239]   Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: E0722 11:17:46.135900    4280 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	  Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: E0722 11:17:46.135900    4280 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:25:31.877867    4522 out.go:239]   Jul 22 11:21:55 running-upgrade-724000 kubelet[11305]: W0722 11:21:55.461534   11305 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	  Jul 22 11:21:55 running-upgrade-724000 kubelet[11305]: W0722 11:21:55.461534   11305 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:25:31.877870    4522 out.go:239]   Jul 22 11:21:55 running-upgrade-724000 kubelet[11305]: E0722 11:21:55.461602   11305 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	  Jul 22 11:21:55 running-upgrade-724000 kubelet[11305]: E0722 11:21:55.461602   11305 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	I0722 04:25:31.877873    4522 out.go:304] Setting ErrFile to fd 2...
	I0722 04:25:31.877875    4522 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:25:41.881847    4522 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:25:46.884140    4522 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:25:46.887587    4522 out.go:177] 
	W0722 04:25:46.891622    4522 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0722 04:25:46.891632    4522 out.go:239] * 
	* 
	W0722 04:25:46.892323    4522 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0722 04:25:46.903465    4522 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:132: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p running-upgrade-724000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
panic.go:626: *** TestRunningBinaryUpgrade FAILED at 2024-07-22 04:25:46.983809 -0700 PDT m=+3455.283702584
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-724000 -n running-upgrade-724000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-724000 -n running-upgrade-724000: exit status 2 (15.55772825s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestRunningBinaryUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestRunningBinaryUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p running-upgrade-724000 logs -n 25
helpers_test.go:252: TestRunningBinaryUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p force-systemd-flag-708000          | force-systemd-flag-708000 | jenkins | v1.33.1 | 22 Jul 24 04:15 PDT |                     |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-env-139000              | force-systemd-env-139000  | jenkins | v1.33.1 | 22 Jul 24 04:15 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-139000           | force-systemd-env-139000  | jenkins | v1.33.1 | 22 Jul 24 04:15 PDT | 22 Jul 24 04:15 PDT |
	| start   | -p docker-flags-973000                | docker-flags-973000       | jenkins | v1.33.1 | 22 Jul 24 04:15 PDT |                     |
	|         | --cache-images=false                  |                           |         |         |                     |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=false                          |                           |         |         |                     |                     |
	|         | --docker-env=FOO=BAR                  |                           |         |         |                     |                     |
	|         | --docker-env=BAZ=BAT                  |                           |         |         |                     |                     |
	|         | --docker-opt=debug                    |                           |         |         |                     |                     |
	|         | --docker-opt=icc=true                 |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-708000             | force-systemd-flag-708000 | jenkins | v1.33.1 | 22 Jul 24 04:15 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-708000          | force-systemd-flag-708000 | jenkins | v1.33.1 | 22 Jul 24 04:15 PDT | 22 Jul 24 04:15 PDT |
	| start   | -p cert-expiration-966000             | cert-expiration-966000    | jenkins | v1.33.1 | 22 Jul 24 04:15 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | docker-flags-973000 ssh               | docker-flags-973000       | jenkins | v1.33.1 | 22 Jul 24 04:15 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=Environment                |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| ssh     | docker-flags-973000 ssh               | docker-flags-973000       | jenkins | v1.33.1 | 22 Jul 24 04:15 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=ExecStart                  |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| delete  | -p docker-flags-973000                | docker-flags-973000       | jenkins | v1.33.1 | 22 Jul 24 04:15 PDT | 22 Jul 24 04:15 PDT |
	| start   | -p cert-options-139000                | cert-options-139000       | jenkins | v1.33.1 | 22 Jul 24 04:15 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | cert-options-139000 ssh               | cert-options-139000       | jenkins | v1.33.1 | 22 Jul 24 04:16 PDT |                     |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-139000 -- sudo        | cert-options-139000       | jenkins | v1.33.1 | 22 Jul 24 04:16 PDT |                     |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-139000                | cert-options-139000       | jenkins | v1.33.1 | 22 Jul 24 04:16 PDT | 22 Jul 24 04:16 PDT |
	| start   | -p running-upgrade-724000             | minikube                  | jenkins | v1.26.0 | 22 Jul 24 04:16 PDT | 22 Jul 24 04:17 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| start   | -p running-upgrade-724000             | running-upgrade-724000    | jenkins | v1.33.1 | 22 Jul 24 04:17 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| start   | -p cert-expiration-966000             | cert-expiration-966000    | jenkins | v1.33.1 | 22 Jul 24 04:19 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-966000             | cert-expiration-966000    | jenkins | v1.33.1 | 22 Jul 24 04:19 PDT | 22 Jul 24 04:19 PDT |
	| start   | -p kubernetes-upgrade-682000          | kubernetes-upgrade-682000 | jenkins | v1.33.1 | 22 Jul 24 04:19 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-682000          | kubernetes-upgrade-682000 | jenkins | v1.33.1 | 22 Jul 24 04:19 PDT | 22 Jul 24 04:19 PDT |
	| start   | -p kubernetes-upgrade-682000          | kubernetes-upgrade-682000 | jenkins | v1.33.1 | 22 Jul 24 04:19 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0   |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-682000          | kubernetes-upgrade-682000 | jenkins | v1.33.1 | 22 Jul 24 04:19 PDT | 22 Jul 24 04:19 PDT |
	| start   | -p stopped-upgrade-239000             | minikube                  | jenkins | v1.26.0 | 22 Jul 24 04:19 PDT | 22 Jul 24 04:20 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-239000 stop           | minikube                  | jenkins | v1.26.0 | 22 Jul 24 04:20 PDT | 22 Jul 24 04:20 PDT |
	| start   | -p stopped-upgrade-239000             | stopped-upgrade-239000    | jenkins | v1.33.1 | 22 Jul 24 04:20 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/22 04:20:22
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0722 04:20:22.156403    4749 out.go:291] Setting OutFile to fd 1 ...
	I0722 04:20:22.156560    4749 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:20:22.156564    4749 out.go:304] Setting ErrFile to fd 2...
	I0722 04:20:22.156567    4749 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:20:22.156711    4749 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19313-1127/.minikube/bin
	I0722 04:20:22.157906    4749 out.go:298] Setting JSON to false
	I0722 04:20:22.177173    4749 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4791,"bootTime":1721642431,"procs":457,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0722 04:20:22.177251    4749 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0722 04:20:22.182980    4749 out.go:177] * [stopped-upgrade-239000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0722 04:20:22.190974    4749 out.go:177]   - MINIKUBE_LOCATION=19313
	I0722 04:20:22.190998    4749 notify.go:220] Checking for updates...
	I0722 04:20:22.198967    4749 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19313-1127/kubeconfig
	I0722 04:20:22.201941    4749 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0722 04:20:22.205991    4749 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0722 04:20:22.209021    4749 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19313-1127/.minikube
	I0722 04:20:22.212033    4749 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0722 04:20:22.215316    4749 config.go:182] Loaded profile config "stopped-upgrade-239000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0722 04:20:22.218947    4749 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0722 04:20:22.221996    4749 driver.go:392] Setting default libvirt URI to qemu:///system
	I0722 04:20:22.225984    4749 out.go:177] * Using the qemu2 driver based on existing profile
	I0722 04:20:22.232883    4749 start.go:297] selected driver: qemu2
	I0722 04:20:22.232889    4749 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-239000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50463 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-239000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0722 04:20:22.232934    4749 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0722 04:20:22.235389    4749 cni.go:84] Creating CNI manager for ""
	I0722 04:20:22.235407    4749 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0722 04:20:22.235437    4749 start.go:340] cluster config:
	{Name:stopped-upgrade-239000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50463 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-239000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0722 04:20:22.235488    4749 iso.go:125] acquiring lock: {Name:mkd71eaf3e91c1dd737b75fca5ca69ff9bdad18f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 04:20:22.242911    4749 out.go:177] * Starting "stopped-upgrade-239000" primary control-plane node in "stopped-upgrade-239000" cluster
	I0722 04:20:22.246989    4749 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0722 04:20:22.247005    4749 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0722 04:20:22.247016    4749 cache.go:56] Caching tarball of preloaded images
	I0722 04:20:22.247071    4749 preload.go:172] Found /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0722 04:20:22.247077    4749 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0722 04:20:22.247147    4749 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/stopped-upgrade-239000/config.json ...
	I0722 04:20:22.247628    4749 start.go:360] acquireMachinesLock for stopped-upgrade-239000: {Name:mkd413881e612ea8d9ddb0175c22cca270cd2452 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0722 04:20:22.247656    4749 start.go:364] duration metric: took 21.917µs to acquireMachinesLock for "stopped-upgrade-239000"
	I0722 04:20:22.247664    4749 start.go:96] Skipping create...Using existing machine configuration
	I0722 04:20:22.247669    4749 fix.go:54] fixHost starting: 
	I0722 04:20:22.247773    4749 fix.go:112] recreateIfNeeded on stopped-upgrade-239000: state=Stopped err=<nil>
	W0722 04:20:22.247782    4749 fix.go:138] unexpected machine state, will restart: <nil>
	I0722 04:20:22.254948    4749 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-239000" ...
	I0722 04:20:22.258974    4749 qemu.go:418] Using hvf for hardware acceleration
	I0722 04:20:22.259038    4749 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.0.2/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/stopped-upgrade-239000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19313-1127/.minikube/machines/stopped-upgrade-239000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/stopped-upgrade-239000/qemu.pid -nic user,model=virtio,hostfwd=tcp::50430-:22,hostfwd=tcp::50431-:2376,hostname=stopped-upgrade-239000 -daemonize /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/stopped-upgrade-239000/disk.qcow2
	I0722 04:20:22.304024    4749 main.go:141] libmachine: STDOUT: 
	I0722 04:20:22.304050    4749 main.go:141] libmachine: STDERR: 
	I0722 04:20:22.304056    4749 main.go:141] libmachine: Waiting for VM to start (ssh -p 50430 docker@127.0.0.1)...
	I0722 04:20:27.741209    4522 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:20:32.743397    4522 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:20:32.743495    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0722 04:20:32.755756    4522 logs.go:276] 2 containers: [dffc81da16cb 5045415bfa4b]
	I0722 04:20:32.755828    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0722 04:20:32.768060    4522 logs.go:276] 2 containers: [8f8f38b73c9c 31e229b2e880]
	I0722 04:20:32.768131    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0722 04:20:32.784123    4522 logs.go:276] 1 containers: [35e09cb53f8d]
	I0722 04:20:32.784193    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0722 04:20:32.796431    4522 logs.go:276] 2 containers: [bb2de59a46b2 d2d617658892]
	I0722 04:20:32.796504    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0722 04:20:32.809051    4522 logs.go:276] 1 containers: [92576e20db6b]
	I0722 04:20:32.809121    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0722 04:20:32.821275    4522 logs.go:276] 2 containers: [d407493c2b8e 1bdf989f8c59]
	I0722 04:20:32.821359    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0722 04:20:32.833050    4522 logs.go:276] 0 containers: []
	W0722 04:20:32.833062    4522 logs.go:278] No container was found matching "kindnet"
	I0722 04:20:32.833122    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0722 04:20:32.847142    4522 logs.go:276] 2 containers: [404815c2fffd b0f51bb80a22]
	I0722 04:20:32.847166    4522 logs.go:123] Gathering logs for describe nodes ...
	I0722 04:20:32.847172    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0722 04:20:32.891361    4522 logs.go:123] Gathering logs for kube-controller-manager [d407493c2b8e] ...
	I0722 04:20:32.891378    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d407493c2b8e"
	I0722 04:20:32.913344    4522 logs.go:123] Gathering logs for storage-provisioner [404815c2fffd] ...
	I0722 04:20:32.913354    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 404815c2fffd"
	I0722 04:20:32.927040    4522 logs.go:123] Gathering logs for storage-provisioner [b0f51bb80a22] ...
	I0722 04:20:32.927050    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0f51bb80a22"
	I0722 04:20:32.939388    4522 logs.go:123] Gathering logs for container status ...
	I0722 04:20:32.939400    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 04:20:32.952157    4522 logs.go:123] Gathering logs for kubelet ...
	I0722 04:20:32.952169    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0722 04:20:32.987820    4522 logs.go:138] Found kubelet problem: Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: W0722 11:17:46.135858    4280 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:20:32.987914    4522 logs.go:138] Found kubelet problem: Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: E0722 11:17:46.135900    4280 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	I0722 04:20:32.988651    4522 logs.go:123] Gathering logs for etcd [8f8f38b73c9c] ...
	I0722 04:20:32.988656    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f8f38b73c9c"
	I0722 04:20:33.003304    4522 logs.go:123] Gathering logs for coredns [35e09cb53f8d] ...
	I0722 04:20:33.003314    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35e09cb53f8d"
	I0722 04:20:33.014869    4522 logs.go:123] Gathering logs for kube-scheduler [d2d617658892] ...
	I0722 04:20:33.014881    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2d617658892"
	I0722 04:20:33.026573    4522 logs.go:123] Gathering logs for kube-controller-manager [1bdf989f8c59] ...
	I0722 04:20:33.026585    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1bdf989f8c59"
	I0722 04:20:33.038997    4522 logs.go:123] Gathering logs for kube-apiserver [5045415bfa4b] ...
	I0722 04:20:33.039012    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5045415bfa4b"
	I0722 04:20:33.051146    4522 logs.go:123] Gathering logs for etcd [31e229b2e880] ...
	I0722 04:20:33.051161    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31e229b2e880"
	I0722 04:20:33.066075    4522 logs.go:123] Gathering logs for kube-scheduler [bb2de59a46b2] ...
	I0722 04:20:33.066088    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb2de59a46b2"
	I0722 04:20:33.077672    4522 logs.go:123] Gathering logs for Docker ...
	I0722 04:20:33.077684    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0722 04:20:33.101290    4522 logs.go:123] Gathering logs for dmesg ...
	I0722 04:20:33.101298    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 04:20:33.105858    4522 logs.go:123] Gathering logs for kube-apiserver [dffc81da16cb] ...
	I0722 04:20:33.105865    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dffc81da16cb"
	I0722 04:20:33.120243    4522 logs.go:123] Gathering logs for kube-proxy [92576e20db6b] ...
	I0722 04:20:33.120253    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92576e20db6b"
	I0722 04:20:33.135352    4522 out.go:304] Setting ErrFile to fd 2...
	I0722 04:20:33.135366    4522 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0722 04:20:33.135393    4522 out.go:239] X Problems detected in kubelet:
	W0722 04:20:33.135399    4522 out.go:239]   Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: W0722 11:17:46.135858    4280 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:20:33.135405    4522 out.go:239]   Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: E0722 11:17:46.135900    4280 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	I0722 04:20:33.135442    4522 out.go:304] Setting ErrFile to fd 2...
	I0722 04:20:33.135447    4522 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:20:43.138992    4522 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:20:42.194511    4749 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/stopped-upgrade-239000/config.json ...
	I0722 04:20:42.195261    4749 machine.go:94] provisionDockerMachine start ...
	I0722 04:20:42.195483    4749 main.go:141] libmachine: Using SSH client type: native
	I0722 04:20:42.195947    4749 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100c5aa10] 0x100c5d270 <nil>  [] 0s} localhost 50430 <nil> <nil>}
	I0722 04:20:42.195961    4749 main.go:141] libmachine: About to run SSH command:
	hostname
	I0722 04:20:42.271873    4749 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0722 04:20:42.271900    4749 buildroot.go:166] provisioning hostname "stopped-upgrade-239000"
	I0722 04:20:42.272217    4749 main.go:141] libmachine: Using SSH client type: native
	I0722 04:20:42.272416    4749 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100c5aa10] 0x100c5d270 <nil>  [] 0s} localhost 50430 <nil> <nil>}
	I0722 04:20:42.272433    4749 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-239000 && echo "stopped-upgrade-239000" | sudo tee /etc/hostname
	I0722 04:20:42.337458    4749 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-239000
	
	I0722 04:20:42.337514    4749 main.go:141] libmachine: Using SSH client type: native
	I0722 04:20:42.337646    4749 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100c5aa10] 0x100c5d270 <nil>  [] 0s} localhost 50430 <nil> <nil>}
	I0722 04:20:42.337655    4749 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-239000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-239000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-239000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0722 04:20:42.394331    4749 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0722 04:20:42.394343    4749 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19313-1127/.minikube CaCertPath:/Users/jenkins/minikube-integration/19313-1127/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19313-1127/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19313-1127/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19313-1127/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19313-1127/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19313-1127/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19313-1127/.minikube}
	I0722 04:20:42.394355    4749 buildroot.go:174] setting up certificates
	I0722 04:20:42.394360    4749 provision.go:84] configureAuth start
	I0722 04:20:42.394366    4749 provision.go:143] copyHostCerts
	I0722 04:20:42.394439    4749 exec_runner.go:144] found /Users/jenkins/minikube-integration/19313-1127/.minikube/ca.pem, removing ...
	I0722 04:20:42.394445    4749 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19313-1127/.minikube/ca.pem
	I0722 04:20:42.394548    4749 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19313-1127/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19313-1127/.minikube/ca.pem (1078 bytes)
	I0722 04:20:42.394753    4749 exec_runner.go:144] found /Users/jenkins/minikube-integration/19313-1127/.minikube/cert.pem, removing ...
	I0722 04:20:42.394757    4749 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19313-1127/.minikube/cert.pem
	I0722 04:20:42.394807    4749 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19313-1127/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19313-1127/.minikube/cert.pem (1123 bytes)
	I0722 04:20:42.394921    4749 exec_runner.go:144] found /Users/jenkins/minikube-integration/19313-1127/.minikube/key.pem, removing ...
	I0722 04:20:42.394924    4749 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19313-1127/.minikube/key.pem
	I0722 04:20:42.394972    4749 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19313-1127/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19313-1127/.minikube/key.pem (1675 bytes)
	I0722 04:20:42.395062    4749 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19313-1127/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19313-1127/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-239000 san=[127.0.0.1 localhost minikube stopped-upgrade-239000]
	I0722 04:20:42.476373    4749 provision.go:177] copyRemoteCerts
	I0722 04:20:42.476418    4749 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0722 04:20:42.476427    4749 sshutil.go:53] new ssh client: &{IP:localhost Port:50430 SSHKeyPath:/Users/jenkins/minikube-integration/19313-1127/.minikube/machines/stopped-upgrade-239000/id_rsa Username:docker}
	I0722 04:20:42.506384    4749 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1127/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0722 04:20:42.513197    4749 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0722 04:20:42.520655    4749 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0722 04:20:42.527994    4749 provision.go:87] duration metric: took 133.624125ms to configureAuth
	I0722 04:20:42.528005    4749 buildroot.go:189] setting minikube options for container-runtime
	I0722 04:20:42.528110    4749 config.go:182] Loaded profile config "stopped-upgrade-239000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0722 04:20:42.528148    4749 main.go:141] libmachine: Using SSH client type: native
	I0722 04:20:42.528236    4749 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100c5aa10] 0x100c5d270 <nil>  [] 0s} localhost 50430 <nil> <nil>}
	I0722 04:20:42.528241    4749 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0722 04:20:42.584438    4749 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0722 04:20:42.584446    4749 buildroot.go:70] root file system type: tmpfs
	I0722 04:20:42.584494    4749 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0722 04:20:42.584536    4749 main.go:141] libmachine: Using SSH client type: native
	I0722 04:20:42.584669    4749 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100c5aa10] 0x100c5d270 <nil>  [] 0s} localhost 50430 <nil> <nil>}
	I0722 04:20:42.584701    4749 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0722 04:20:42.642465    4749 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0722 04:20:42.642517    4749 main.go:141] libmachine: Using SSH client type: native
	I0722 04:20:42.642629    4749 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100c5aa10] 0x100c5d270 <nil>  [] 0s} localhost 50430 <nil> <nil>}
	I0722 04:20:42.642637    4749 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0722 04:20:43.014601    4749 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
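Note on the step above: the SSH command uses a write-then-swap install pattern. The freshly rendered unit goes to docker.service.new, and only when it differs from the existing docker.service (or, as in this run, when no docker.service exists yet) is it moved into place and the daemon reloaded, enabled, and restarted. A minimal sketch of the same pattern, with paths mirroring those above:

new=/lib/systemd/system/docker.service.new
cur=/lib/systemd/system/docker.service
# diff exits non-zero when the files differ or when $cur does not exist yet,
# so both "changed" and "first install" take the swap-and-restart path.
if ! sudo diff -u "$cur" "$new"; then
    sudo mv "$new" "$cur"
    sudo systemctl daemon-reload
    sudo systemctl enable docker
    sudo systemctl restart docker
fi
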
	I0722 04:20:43.014614    4749 machine.go:97] duration metric: took 819.359375ms to provisionDockerMachine
	I0722 04:20:43.014621    4749 start.go:293] postStartSetup for "stopped-upgrade-239000" (driver="qemu2")
	I0722 04:20:43.014628    4749 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0722 04:20:43.014688    4749 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0722 04:20:43.014696    4749 sshutil.go:53] new ssh client: &{IP:localhost Port:50430 SSHKeyPath:/Users/jenkins/minikube-integration/19313-1127/.minikube/machines/stopped-upgrade-239000/id_rsa Username:docker}
	I0722 04:20:43.044094    4749 ssh_runner.go:195] Run: cat /etc/os-release
	I0722 04:20:43.045510    4749 info.go:137] Remote host: Buildroot 2021.02.12
	I0722 04:20:43.045517    4749 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19313-1127/.minikube/addons for local assets ...
	I0722 04:20:43.045605    4749 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19313-1127/.minikube/files for local assets ...
	I0722 04:20:43.045728    4749 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19313-1127/.minikube/files/etc/ssl/certs/16182.pem -> 16182.pem in /etc/ssl/certs
	I0722 04:20:43.045861    4749 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0722 04:20:43.048873    4749 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1127/.minikube/files/etc/ssl/certs/16182.pem --> /etc/ssl/certs/16182.pem (1708 bytes)
	I0722 04:20:43.056153    4749 start.go:296] duration metric: took 41.527584ms for postStartSetup
	I0722 04:20:43.056165    4749 fix.go:56] duration metric: took 20.808920417s for fixHost
	I0722 04:20:43.056196    4749 main.go:141] libmachine: Using SSH client type: native
	I0722 04:20:43.056310    4749 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100c5aa10] 0x100c5d270 <nil>  [] 0s} localhost 50430 <nil> <nil>}
	I0722 04:20:43.056314    4749 main.go:141] libmachine: About to run SSH command:
date +%s.%N
	I0722 04:20:43.110699    4749 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721647242.983111796
	
	I0722 04:20:43.110706    4749 fix.go:216] guest clock: 1721647242.983111796
	I0722 04:20:43.110710    4749 fix.go:229] Guest: 2024-07-22 04:20:42.983111796 -0700 PDT Remote: 2024-07-22 04:20:43.056167 -0700 PDT m=+20.928665668 (delta=-73.055204ms)
	I0722 04:20:43.110724    4749 fix.go:200] guest clock delta is within tolerance: -73.055204ms
	I0722 04:20:43.110727    4749 start.go:83] releasing machines lock for "stopped-upgrade-239000", held for 20.863491167s
	I0722 04:20:43.110782    4749 ssh_runner.go:195] Run: cat /version.json
	I0722 04:20:43.110791    4749 sshutil.go:53] new ssh client: &{IP:localhost Port:50430 SSHKeyPath:/Users/jenkins/minikube-integration/19313-1127/.minikube/machines/stopped-upgrade-239000/id_rsa Username:docker}
	I0722 04:20:43.110783    4749 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0722 04:20:43.110821    4749 sshutil.go:53] new ssh client: &{IP:localhost Port:50430 SSHKeyPath:/Users/jenkins/minikube-integration/19313-1127/.minikube/machines/stopped-upgrade-239000/id_rsa Username:docker}
	W0722 04:20:43.111313    4749 sshutil.go:64] dial failure (will retry): dial tcp [::1]:50430: connect: connection refused
	I0722 04:20:43.111336    4749 retry.go:31] will retry after 360.026592ms: dial tcp [::1]:50430: connect: connection refused
	W0722 04:20:43.527654    4749 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0722 04:20:43.527771    4749 ssh_runner.go:195] Run: systemctl --version
	I0722 04:20:43.530867    4749 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0722 04:20:43.533745    4749 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0722 04:20:43.533798    4749 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
I0722 04:20:43.538337    4749 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0722 04:20:43.544872    4749 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0722 04:20:43.544883    4749 start.go:495] detecting cgroup driver to use...
I0722 04:20:43.544968    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0722 04:20:43.553325    4749 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0722 04:20:43.556925    4749 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0722 04:20:43.560785    4749 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0722 04:20:43.560819    4749 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0722 04:20:43.564256    4749 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0722 04:20:43.567514    4749 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0722 04:20:43.570718    4749 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0722 04:20:43.576093    4749 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0722 04:20:43.580028    4749 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0722 04:20:43.583337    4749 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0722 04:20:43.588665    4749 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0722 04:20:43.592465    4749 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0722 04:20:43.595593    4749 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0722 04:20:43.598324    4749 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 04:20:43.674987    4749 ssh_runner.go:195] Run: sudo systemctl restart containerd
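The series of sed edits above rewrites /etc/containerd/config.toml so containerd uses the cgroupfs cgroup driver, the runc v2 runtime, the pause:3.7 sandbox image, and /etc/cni/net.d for CNI configuration. A quick, illustrative way to confirm the result inside the guest after the restart:

# Inspect the settings the sed commands above are expected to leave behind.
sudo grep -nE 'SystemdCgroup|sandbox_image|restrict_oom_score_adj|conf_dir' /etc/containerd/config.toml
# Expected, roughly:
#   sandbox_image = "registry.k8s.io/pause:3.7"
#   restrict_oom_score_adj = false
#   SystemdCgroup = false
#   conf_dir = "/etc/cni/net.d"
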
	I0722 04:20:43.680500    4749 start.go:495] detecting cgroup driver to use...
	I0722 04:20:43.680549    4749 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0722 04:20:43.687321    4749 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0722 04:20:43.692803    4749 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0722 04:20:43.699792    4749 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0722 04:20:43.704174    4749 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0722 04:20:43.708545    4749 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0722 04:20:43.760373    4749 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0722 04:20:43.765152    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0722 04:20:43.770399    4749 ssh_runner.go:195] Run: which cri-dockerd
	I0722 04:20:43.771771    4749 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0722 04:20:43.774237    4749 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0722 04:20:43.778927    4749 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0722 04:20:43.859394    4749 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0722 04:20:43.935863    4749 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0722 04:20:43.935934    4749 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0722 04:20:43.941183    4749 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 04:20:44.019717    4749 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0722 04:20:45.180724    4749 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.161011292s)
	I0722 04:20:45.180794    4749 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0722 04:20:45.185671    4749 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0722 04:20:45.191977    4749 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0722 04:20:45.196685    4749 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0722 04:20:45.274732    4749 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0722 04:20:45.350368    4749 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 04:20:45.425915    4749 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0722 04:20:45.431805    4749 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0722 04:20:45.436258    4749 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 04:20:45.519430    4749 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0722 04:20:45.560302    4749 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0722 04:20:45.560383    4749 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0722 04:20:45.562521    4749 start.go:563] Will wait 60s for crictl version
	I0722 04:20:45.562579    4749 ssh_runner.go:195] Run: which crictl
	I0722 04:20:45.563892    4749 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0722 04:20:45.578200    4749 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0722 04:20:45.578272    4749 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0722 04:20:45.594327    4749 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0722 04:20:45.616634    4749 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0722 04:20:45.616700    4749 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0722 04:20:45.618017    4749 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0722 04:20:45.621466    4749 kubeadm.go:883] updating cluster {Name:stopped-upgrade-239000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50463 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName
:stopped-upgrade-239000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Di
sableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0722 04:20:45.621510    4749 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0722 04:20:45.621551    4749 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0722 04:20:45.632225    4749 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0722 04:20:45.632234    4749 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0722 04:20:45.632285    4749 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0722 04:20:45.635691    4749 ssh_runner.go:195] Run: which lz4
I0722 04:20:45.636993    4749 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
I0722 04:20:45.638231    4749 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0722 04:20:45.638242    4749 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0722 04:20:46.569392    4749 docker.go:649] duration metric: took 932.444417ms to copy over tarball
	I0722 04:20:46.569449    4749 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0722 04:20:48.141088    4522 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:20:48.141299    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0722 04:20:48.159183    4522 logs.go:276] 2 containers: [dffc81da16cb 5045415bfa4b]
	I0722 04:20:48.159264    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0722 04:20:48.172199    4522 logs.go:276] 2 containers: [8f8f38b73c9c 31e229b2e880]
	I0722 04:20:48.172285    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0722 04:20:48.183691    4522 logs.go:276] 1 containers: [35e09cb53f8d]
	I0722 04:20:48.183766    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0722 04:20:48.194757    4522 logs.go:276] 2 containers: [bb2de59a46b2 d2d617658892]
	I0722 04:20:48.194832    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0722 04:20:48.205256    4522 logs.go:276] 1 containers: [92576e20db6b]
	I0722 04:20:48.205331    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0722 04:20:48.215620    4522 logs.go:276] 2 containers: [d407493c2b8e 1bdf989f8c59]
	I0722 04:20:48.215693    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0722 04:20:48.226348    4522 logs.go:276] 0 containers: []
	W0722 04:20:48.226360    4522 logs.go:278] No container was found matching "kindnet"
	I0722 04:20:48.226430    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0722 04:20:48.236489    4522 logs.go:276] 2 containers: [404815c2fffd b0f51bb80a22]
	I0722 04:20:48.236507    4522 logs.go:123] Gathering logs for describe nodes ...
	I0722 04:20:48.236513    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0722 04:20:48.270785    4522 logs.go:123] Gathering logs for etcd [31e229b2e880] ...
	I0722 04:20:48.270798    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31e229b2e880"
	I0722 04:20:48.290216    4522 logs.go:123] Gathering logs for storage-provisioner [404815c2fffd] ...
	I0722 04:20:48.290228    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 404815c2fffd"
	I0722 04:20:48.302190    4522 logs.go:123] Gathering logs for container status ...
	I0722 04:20:48.302205    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 04:20:48.313986    4522 logs.go:123] Gathering logs for kube-scheduler [d2d617658892] ...
	I0722 04:20:48.313997    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2d617658892"
	I0722 04:20:48.325661    4522 logs.go:123] Gathering logs for kube-proxy [92576e20db6b] ...
	I0722 04:20:48.325672    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92576e20db6b"
	I0722 04:20:48.337448    4522 logs.go:123] Gathering logs for kube-controller-manager [1bdf989f8c59] ...
	I0722 04:20:48.337463    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1bdf989f8c59"
	I0722 04:20:48.354039    4522 logs.go:123] Gathering logs for Docker ...
	I0722 04:20:48.354049    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0722 04:20:48.377378    4522 logs.go:123] Gathering logs for kube-apiserver [5045415bfa4b] ...
	I0722 04:20:48.377387    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5045415bfa4b"
	I0722 04:20:48.390719    4522 logs.go:123] Gathering logs for etcd [8f8f38b73c9c] ...
	I0722 04:20:48.390731    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f8f38b73c9c"
	I0722 04:20:48.404235    4522 logs.go:123] Gathering logs for kube-controller-manager [d407493c2b8e] ...
	I0722 04:20:48.404248    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d407493c2b8e"
	I0722 04:20:48.421598    4522 logs.go:123] Gathering logs for storage-provisioner [b0f51bb80a22] ...
	I0722 04:20:48.421610    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0f51bb80a22"
	I0722 04:20:48.434559    4522 logs.go:123] Gathering logs for kube-scheduler [bb2de59a46b2] ...
	I0722 04:20:48.434571    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb2de59a46b2"
	I0722 04:20:48.455113    4522 logs.go:123] Gathering logs for kubelet ...
	I0722 04:20:48.455127    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0722 04:20:48.489107    4522 logs.go:138] Found kubelet problem: Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: W0722 11:17:46.135858    4280 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:20:48.489203    4522 logs.go:138] Found kubelet problem: Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: E0722 11:17:46.135900    4280 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	I0722 04:20:48.489946    4522 logs.go:123] Gathering logs for dmesg ...
	I0722 04:20:48.489952    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 04:20:48.494050    4522 logs.go:123] Gathering logs for kube-apiserver [dffc81da16cb] ...
	I0722 04:20:48.494058    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dffc81da16cb"
	I0722 04:20:48.510451    4522 logs.go:123] Gathering logs for coredns [35e09cb53f8d] ...
	I0722 04:20:48.510461    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35e09cb53f8d"
	I0722 04:20:48.521693    4522 out.go:304] Setting ErrFile to fd 2...
	I0722 04:20:48.521704    4522 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0722 04:20:48.521733    4522 out.go:239] X Problems detected in kubelet:
	W0722 04:20:48.521737    4522 out.go:239]   Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: W0722 11:17:46.135858    4280 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:20:48.521743    4522 out.go:239]   Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: E0722 11:17:46.135900    4280 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	I0722 04:20:48.521751    4522 out.go:304] Setting ErrFile to fd 2...
	I0722 04:20:48.521755    4522 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:20:47.721624    4749 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.152182209s)
	I0722 04:20:47.721637    4749 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0722 04:20:47.739369    4749 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0722 04:20:47.742660    4749 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0722 04:20:47.748292    4749 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 04:20:47.832190    4749 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0722 04:20:49.518186    4749 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.686005833s)
	I0722 04:20:49.518290    4749 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0722 04:20:49.532248    4749 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0722 04:20:49.532257    4749 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0722 04:20:49.532262    4749 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0722 04:20:49.537726    4749 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 04:20:49.539690    4749 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0722 04:20:49.541401    4749 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0722 04:20:49.541447    4749 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 04:20:49.543463    4749 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0722 04:20:49.543439    4749 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0722 04:20:49.544820    4749 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0722 04:20:49.544997    4749 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0722 04:20:49.546372    4749 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0722 04:20:49.546469    4749 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0722 04:20:49.547550    4749 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0722 04:20:49.547604    4749 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0722 04:20:49.548547    4749 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0722 04:20:49.548561    4749 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0722 04:20:49.549445    4749 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0722 04:20:49.550130    4749 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0722 04:20:50.010737    4749 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0722 04:20:50.023272    4749 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0722 04:20:50.023298    4749 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0722 04:20:50.023348    4749 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0722 04:20:50.026924    4749 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0722 04:20:50.032296    4749 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0722 04:20:50.034512    4749 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	W0722 04:20:50.035853    4749 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0722 04:20:50.035970    4749 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0722 04:20:50.036517    4749 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0722 04:20:50.039697    4749 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0722 04:20:50.039716    4749 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0722 04:20:50.039757    4749 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0722 04:20:50.047476    4749 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0722 04:20:50.047501    4749 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0722 04:20:50.047565    4749 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0722 04:20:50.047605    4749 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0722 04:20:50.061087    4749 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0722 04:20:50.061120    4749 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0722 04:20:50.061186    4749 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0722 04:20:50.061455    4749 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0722 04:20:50.061465    4749 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0722 04:20:50.061485    4749 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0722 04:20:50.073080    4749 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0722 04:20:50.075697    4749 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0722 04:20:50.075703    4749 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0722 04:20:50.075715    4749 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0722 04:20:50.075763    4749 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0722 04:20:50.088008    4749 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0722 04:20:50.092484    4749 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0722 04:20:50.092492    4749 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0722 04:20:50.092518    4749 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
I0722 04:20:50.092605    4749 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
I0722 04:20:50.092605    4749 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
I0722 04:20:50.101416    4749 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0722 04:20:50.101445    4749 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
I0722 04:20:50.101452    4749 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0722 04:20:50.101463    4749 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0722 04:20:50.101515    4749 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0722 04:20:50.101534    4749 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0722 04:20:50.101571    4749 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0722 04:20:50.117856    4749 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0722 04:20:50.133159    4749 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0722 04:20:50.133182    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0722 04:20:50.175465    4749 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0722 04:20:50.175487    4749 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0722 04:20:50.175493    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0722 04:20:50.211543    4749 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
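Images missing from the runtime are restored from the local cache by copying the saved tarball into the guest and streaming it into the Docker daemon, as in the pause and coredns loads above. Inside the guest the load step reduces to the following (the path mirrors the one used above):

# Stream a cached image tarball into the container runtime and confirm it landed.
sudo cat /var/lib/minikube/images/pause_3.7 | docker load
docker images --format '{{.Repository}}:{{.Tag}}' | grep pause
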
	W0722 04:20:52.561556    4749 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0722 04:20:52.561718    4749 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 04:20:52.577409    4749 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0722 04:20:52.577438    4749 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 04:20:52.577506    4749 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 04:20:52.594066    4749 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
I0722 04:20:52.594180    4749 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
I0722 04:20:52.595705    4749 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0722 04:20:52.595715    4749 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0722 04:20:52.627538    4749 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0722 04:20:52.627558    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0722 04:20:52.861356    4749 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0722 04:20:52.861392    4749 cache_images.go:92] duration metric: took 3.329182s to LoadCachedImages
	W0722 04:20:52.861438    4749 out.go:239] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
	I0722 04:20:52.861444    4749 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0722 04:20:52.861497    4749 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-239000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-239000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0722 04:20:52.861576    4749 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0722 04:20:52.875384    4749 cni.go:84] Creating CNI manager for ""
	I0722 04:20:52.875397    4749 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0722 04:20:52.875401    4749 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0722 04:20:52.875410    4749 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-239000 NodeName:stopped-upgrade-239000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/
etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0722 04:20:52.875470    4749 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-239000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
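This generated config is written to /var/tmp/minikube/kubeadm.yaml.new and later diffed against the copy already on disk to decide whether the control plane needs reconfiguring. When a (re)initialization is actually required, a config like this is handed to kubeadm roughly as follows; the exact binary path and preflight flags here are illustrative assumptions, since the invocation itself is not shown in this log:

# Illustrative only: feed the rendered config to the version-matched kubeadm binary.
sudo /var/lib/minikube/binaries/v1.24.1/kubeadm init \
    --config /var/tmp/minikube/kubeadm.yaml.new \
    --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests
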
	I0722 04:20:52.875522    4749 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0722 04:20:52.878343    4749 binaries.go:44] Found k8s binaries, skipping transfer
	I0722 04:20:52.878375    4749 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0722 04:20:52.881069    4749 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0722 04:20:52.885924    4749 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0722 04:20:52.890570    4749 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0722 04:20:52.895969    4749 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0722 04:20:52.897207    4749 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0722 04:20:52.900845    4749 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 04:20:52.985574    4749 ssh_runner.go:195] Run: sudo systemctl start kubelet
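With the kubelet unit and its 10-kubeadm.conf drop-in copied into place and systemd reloaded above, the merged unit can be inspected directly; an illustrative check inside the guest:

# Show the effective kubelet unit (base unit plus the 10-kubeadm.conf drop-in)
# and confirm the service actually came up after the start command above.
sudo systemctl cat kubelet
sudo systemctl is-active kubelet
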
	I0722 04:20:52.991003    4749 certs.go:68] Setting up /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/stopped-upgrade-239000 for IP: 10.0.2.15
	I0722 04:20:52.991010    4749 certs.go:194] generating shared ca certs ...
	I0722 04:20:52.991019    4749 certs.go:226] acquiring lock for ca certs: {Name:mk3f2c80d56e217629ae5cc59f1253ebc769d305 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 04:20:52.991188    4749 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19313-1127/.minikube/ca.key
	I0722 04:20:52.991240    4749 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19313-1127/.minikube/proxy-client-ca.key
	I0722 04:20:52.991248    4749 certs.go:256] generating profile certs ...
	I0722 04:20:52.991322    4749 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/stopped-upgrade-239000/client.key
	I0722 04:20:52.991346    4749 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/stopped-upgrade-239000/apiserver.key.5038eef0
	I0722 04:20:52.991360    4749 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/stopped-upgrade-239000/apiserver.crt.5038eef0 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0722 04:20:53.179011    4749 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/stopped-upgrade-239000/apiserver.crt.5038eef0 ...
	I0722 04:20:53.179025    4749 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/stopped-upgrade-239000/apiserver.crt.5038eef0: {Name:mk320ab3e80faa0708703cf9e34fb5fa8d76946f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 04:20:53.179784    4749 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/stopped-upgrade-239000/apiserver.key.5038eef0 ...
	I0722 04:20:53.179790    4749 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/stopped-upgrade-239000/apiserver.key.5038eef0: {Name:mk74e40d2b818fe75dad8d11f3f613fddec42567 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 04:20:53.179932    4749 certs.go:381] copying /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/stopped-upgrade-239000/apiserver.crt.5038eef0 -> /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/stopped-upgrade-239000/apiserver.crt
	I0722 04:20:53.180480    4749 certs.go:385] copying /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/stopped-upgrade-239000/apiserver.key.5038eef0 -> /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/stopped-upgrade-239000/apiserver.key
	I0722 04:20:53.180643    4749 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/stopped-upgrade-239000/proxy-client.key
	I0722 04:20:53.180781    4749 certs.go:484] found cert: /Users/jenkins/minikube-integration/19313-1127/.minikube/certs/1618.pem (1338 bytes)
	W0722 04:20:53.180812    4749 certs.go:480] ignoring /Users/jenkins/minikube-integration/19313-1127/.minikube/certs/1618_empty.pem, impossibly tiny 0 bytes
	I0722 04:20:53.180818    4749 certs.go:484] found cert: /Users/jenkins/minikube-integration/19313-1127/.minikube/certs/ca-key.pem (1679 bytes)
	I0722 04:20:53.180844    4749 certs.go:484] found cert: /Users/jenkins/minikube-integration/19313-1127/.minikube/certs/ca.pem (1078 bytes)
	I0722 04:20:53.180869    4749 certs.go:484] found cert: /Users/jenkins/minikube-integration/19313-1127/.minikube/certs/cert.pem (1123 bytes)
	I0722 04:20:53.180892    4749 certs.go:484] found cert: /Users/jenkins/minikube-integration/19313-1127/.minikube/certs/key.pem (1675 bytes)
	I0722 04:20:53.180947    4749 certs.go:484] found cert: /Users/jenkins/minikube-integration/19313-1127/.minikube/files/etc/ssl/certs/16182.pem (1708 bytes)
	I0722 04:20:53.181306    4749 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1127/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0722 04:20:53.188450    4749 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1127/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0722 04:20:53.195829    4749 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1127/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0722 04:20:53.202757    4749 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1127/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0722 04:20:53.209054    4749 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/stopped-upgrade-239000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0722 04:20:53.216248    4749 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/stopped-upgrade-239000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0722 04:20:53.223501    4749 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/stopped-upgrade-239000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0722 04:20:53.230154    4749 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/stopped-upgrade-239000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0722 04:20:53.236873    4749 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1127/.minikube/files/etc/ssl/certs/16182.pem --> /usr/share/ca-certificates/16182.pem (1708 bytes)
	I0722 04:20:53.244129    4749 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1127/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0722 04:20:53.250938    4749 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1127/.minikube/certs/1618.pem --> /usr/share/ca-certificates/1618.pem (1338 bytes)
	I0722 04:20:53.257361    4749 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0722 04:20:53.262385    4749 ssh_runner.go:195] Run: openssl version
	I0722 04:20:53.264211    4749 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0722 04:20:53.268105    4749 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0722 04:20:53.269466    4749 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 22 10:29 /usr/share/ca-certificates/minikubeCA.pem
	I0722 04:20:53.269486    4749 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0722 04:20:53.271271    4749 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0722 04:20:53.274193    4749 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1618.pem && ln -fs /usr/share/ca-certificates/1618.pem /etc/ssl/certs/1618.pem"
	I0722 04:20:53.277136    4749 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1618.pem
	I0722 04:20:53.278530    4749 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 22 10:36 /usr/share/ca-certificates/1618.pem
	I0722 04:20:53.278555    4749 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1618.pem
	I0722 04:20:53.280229    4749 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1618.pem /etc/ssl/certs/51391683.0"
	I0722 04:20:53.283228    4749 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16182.pem && ln -fs /usr/share/ca-certificates/16182.pem /etc/ssl/certs/16182.pem"
	I0722 04:20:53.286072    4749 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16182.pem
	I0722 04:20:53.287444    4749 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 22 10:36 /usr/share/ca-certificates/16182.pem
	I0722 04:20:53.287461    4749 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16182.pem
	I0722 04:20:53.289165    4749 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16182.pem /etc/ssl/certs/3ec20f2e.0"
	I0722 04:20:53.292811    4749 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0722 04:20:53.294206    4749 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0722 04:20:53.296153    4749 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0722 04:20:53.298199    4749 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0722 04:20:53.300043    4749 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0722 04:20:53.301716    4749 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0722 04:20:53.303398    4749 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
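The openssl runs above validate the existing cluster certificates: -checkend 86400 makes openssl exit non-zero if the certificate expires within the next 86400 seconds (24 hours), which is typically the cue to regenerate it. For example:

# Fails (exit 1) if the cert is within 24h of expiry, succeeds otherwise.
if sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400; then
    echo "certificate valid for at least another 24h"
else
    echo "certificate expires within 24h (or is already expired); would regenerate"
fi
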
	I0722 04:20:53.305111    4749 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-239000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50463 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:st
opped-upgrade-239000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disab
leOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0722 04:20:53.305180    4749 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0722 04:20:53.315723    4749 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0722 04:20:53.318742    4749 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0722 04:20:53.318748    4749 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0722 04:20:53.318766    4749 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0722 04:20:53.321622    4749 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0722 04:20:53.321933    4749 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-239000" does not appear in /Users/jenkins/minikube-integration/19313-1127/kubeconfig
	I0722 04:20:53.322030    4749 kubeconfig.go:62] /Users/jenkins/minikube-integration/19313-1127/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-239000" cluster setting kubeconfig missing "stopped-upgrade-239000" context setting]
	I0722 04:20:53.322231    4749 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19313-1127/kubeconfig: {Name:mkb5cae8b3f3a2ff5a3e393f1e4daf97762f1a5b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 04:20:53.322686    4749 kapi.go:59] client config for stopped-upgrade-239000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/stopped-upgrade-239000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/stopped-upgrade-239000/client.key", CAFile:"/Users/jenkins/minikube-integration/19313-1127/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x101fef790), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0722 04:20:53.323006    4749 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0722 04:20:53.325765    4749 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-239000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
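
The diff above is how this restart path decides whether the node needs reconfiguring: the kubeadm.yaml rendered for the current start differs from the one already on disk (CRI socket URI and cgroup driver have changed). A minimal sketch of that drift check, assuming a hypothetical runOnNode helper in place of the real SSH runner (this is not minikube's actual implementation):

    // Sketch only: report drift when `diff -u` between the deployed kubeadm.yaml
    // and the freshly rendered kubeadm.yaml.new exits non-zero.
    // runOnNode is a hypothetical stand-in for the SSH command runner.
    func hasKubeadmDrift(runOnNode func(cmd string) ([]byte, error)) (bool, string) {
        out, err := runOnNode("sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new")
        if err != nil {
            // diff returns a non-zero exit status when the files differ,
            // so an error here is treated as drift and the diff output is logged.
            return true, string(out)
        }
        return false, ""
    }

When drift is detected, the new file is copied over the old one (the `sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml` run further down) before the kubeadm init phases are re-run.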
	I0722 04:20:53.325772    4749 kubeadm.go:1160] stopping kube-system containers ...
	I0722 04:20:53.325810    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0722 04:20:53.336978    4749 docker.go:483] Stopping containers: [b242274d2995 82c7409ff149 107f02380e96 cdb2f02c95ca 6d3fe4f4d288 9673cbf4cea7 d58d89cd0382 38d038729737 286c0889019f]
	I0722 04:20:53.337044    4749 ssh_runner.go:195] Run: docker stop b242274d2995 82c7409ff149 107f02380e96 cdb2f02c95ca 6d3fe4f4d288 9673cbf4cea7 d58d89cd0382 38d038729737 286c0889019f
	I0722 04:20:53.347054    4749 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0722 04:20:53.352605    4749 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 04:20:53.355705    4749 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 04:20:53.355710    4749 kubeadm.go:157] found existing configuration files:
	
	I0722 04:20:53.355729    4749 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50463 /etc/kubernetes/admin.conf
	I0722 04:20:53.358131    4749 kubeadm.go:163] "https://control-plane.minikube.internal:50463" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50463 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 04:20:53.358152    4749 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 04:20:53.360749    4749 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50463 /etc/kubernetes/kubelet.conf
	I0722 04:20:53.363654    4749 kubeadm.go:163] "https://control-plane.minikube.internal:50463" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50463 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 04:20:53.363675    4749 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 04:20:53.366321    4749 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50463 /etc/kubernetes/controller-manager.conf
	I0722 04:20:53.368801    4749 kubeadm.go:163] "https://control-plane.minikube.internal:50463" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50463 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 04:20:53.368822    4749 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 04:20:53.371716    4749 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50463 /etc/kubernetes/scheduler.conf
	I0722 04:20:53.374276    4749 kubeadm.go:163] "https://control-plane.minikube.internal:50463" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50463 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 04:20:53.374315    4749 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0722 04:20:53.377205    4749 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0722 04:20:53.380637    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 04:20:53.404695    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 04:20:53.781762    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0722 04:20:53.911001    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 04:20:53.938129    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0722 04:20:53.965875    4749 api_server.go:52] waiting for apiserver process to appear ...
	I0722 04:20:53.965965    4749 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 04:20:54.466017    4749 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 04:20:54.968050    4749 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 04:20:54.972469    4749 api_server.go:72] duration metric: took 1.006612792s to wait for apiserver process to appear ...
	I0722 04:20:54.972479    4749 api_server.go:88] waiting for apiserver healthz status ...
	I0722 04:20:54.972489    4749 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:20:58.525728    4522 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:20:59.972752    4749 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
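
The repeating healthz lines in this stretch of the log are a poll loop: each request to https://10.0.2.15:8443/healthz uses a short per-request client timeout, which is what produces the "Client.Timeout exceeded while awaiting headers" errors while the API server is not yet answering. A minimal sketch of that pattern (assumed, not the actual minikube code):

    // Sketch only: poll an apiserver healthz endpoint with a short per-request
    // timeout until it returns 200 OK or an overall deadline expires.
    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func waitForHealthz(url string, overall time.Duration) error {
        client := &http.Client{
            Timeout: 5 * time.Second, // per-request timeout; a silent apiserver yields "Client.Timeout exceeded"
            Transport: &http.Transport{
                // Sketch shortcut: skip TLS verification; real code trusts the cluster CA instead.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        deadline := time.Now().Add(overall)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver never became healthy at %s", url)
    }

    func main() {
        if err := waitForHealthz("https://10.0.2.15:8443/healthz", 4*time.Minute); err != nil {
            fmt.Println(err)
        }
    }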
	I0722 04:20:59.972777    4749 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:21:03.527867    4522 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:21:03.527985    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0722 04:21:03.538986    4522 logs.go:276] 2 containers: [dffc81da16cb 5045415bfa4b]
	I0722 04:21:03.539069    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0722 04:21:03.549926    4522 logs.go:276] 2 containers: [8f8f38b73c9c 31e229b2e880]
	I0722 04:21:03.549992    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0722 04:21:03.560560    4522 logs.go:276] 1 containers: [35e09cb53f8d]
	I0722 04:21:03.560629    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0722 04:21:03.571325    4522 logs.go:276] 2 containers: [bb2de59a46b2 d2d617658892]
	I0722 04:21:03.571402    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0722 04:21:03.581897    4522 logs.go:276] 1 containers: [92576e20db6b]
	I0722 04:21:03.581969    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0722 04:21:03.592522    4522 logs.go:276] 2 containers: [d407493c2b8e 1bdf989f8c59]
	I0722 04:21:03.592598    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0722 04:21:03.602472    4522 logs.go:276] 0 containers: []
	W0722 04:21:03.602484    4522 logs.go:278] No container was found matching "kindnet"
	I0722 04:21:03.602537    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0722 04:21:03.617143    4522 logs.go:276] 2 containers: [404815c2fffd b0f51bb80a22]
	I0722 04:21:03.617160    4522 logs.go:123] Gathering logs for Docker ...
	I0722 04:21:03.617165    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0722 04:21:03.640417    4522 logs.go:123] Gathering logs for kubelet ...
	I0722 04:21:03.640428    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0722 04:21:03.675035    4522 logs.go:138] Found kubelet problem: Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: W0722 11:17:46.135858    4280 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:21:03.675127    4522 logs.go:138] Found kubelet problem: Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: E0722 11:17:46.135900    4280 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	I0722 04:21:03.675819    4522 logs.go:123] Gathering logs for coredns [35e09cb53f8d] ...
	I0722 04:21:03.675824    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35e09cb53f8d"
	I0722 04:21:03.686914    4522 logs.go:123] Gathering logs for kube-proxy [92576e20db6b] ...
	I0722 04:21:03.686924    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92576e20db6b"
	I0722 04:21:03.698890    4522 logs.go:123] Gathering logs for kube-controller-manager [d407493c2b8e] ...
	I0722 04:21:03.698899    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d407493c2b8e"
	I0722 04:21:03.716604    4522 logs.go:123] Gathering logs for dmesg ...
	I0722 04:21:03.716612    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 04:21:03.720805    4522 logs.go:123] Gathering logs for kube-scheduler [bb2de59a46b2] ...
	I0722 04:21:03.720812    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb2de59a46b2"
	I0722 04:21:03.732359    4522 logs.go:123] Gathering logs for storage-provisioner [404815c2fffd] ...
	I0722 04:21:03.732370    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 404815c2fffd"
	I0722 04:21:03.744363    4522 logs.go:123] Gathering logs for container status ...
	I0722 04:21:03.744374    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 04:21:03.756351    4522 logs.go:123] Gathering logs for etcd [31e229b2e880] ...
	I0722 04:21:03.756363    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31e229b2e880"
	I0722 04:21:03.770705    4522 logs.go:123] Gathering logs for kube-controller-manager [1bdf989f8c59] ...
	I0722 04:21:03.770716    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1bdf989f8c59"
	I0722 04:21:03.782240    4522 logs.go:123] Gathering logs for kube-scheduler [d2d617658892] ...
	I0722 04:21:03.782266    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2d617658892"
	I0722 04:21:03.793031    4522 logs.go:123] Gathering logs for storage-provisioner [b0f51bb80a22] ...
	I0722 04:21:03.793044    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0f51bb80a22"
	I0722 04:21:03.805977    4522 logs.go:123] Gathering logs for describe nodes ...
	I0722 04:21:03.805990    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0722 04:21:03.843639    4522 logs.go:123] Gathering logs for kube-apiserver [dffc81da16cb] ...
	I0722 04:21:03.843650    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dffc81da16cb"
	I0722 04:21:03.857794    4522 logs.go:123] Gathering logs for kube-apiserver [5045415bfa4b] ...
	I0722 04:21:03.857808    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5045415bfa4b"
	I0722 04:21:03.869973    4522 logs.go:123] Gathering logs for etcd [8f8f38b73c9c] ...
	I0722 04:21:03.869983    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f8f38b73c9c"
	I0722 04:21:03.883369    4522 out.go:304] Setting ErrFile to fd 2...
	I0722 04:21:03.883379    4522 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0722 04:21:03.883407    4522 out.go:239] X Problems detected in kubelet:
	W0722 04:21:03.883415    4522 out.go:239]   Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: W0722 11:17:46.135858    4280 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:21:03.883420    4522 out.go:239]   Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: E0722 11:17:46.135900    4280 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	I0722 04:21:03.883424    4522 out.go:304] Setting ErrFile to fd 2...
	I0722 04:21:03.883427    4522 out.go:338] TERM=,COLORTERM=, which probably does not support color
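
Each "Gathering logs for ..." block above follows the same two-step pattern: list the k8s_<component> containers (including exited ones) by ID, then tail the last 400 log lines of each. A minimal sketch of that loop, again with a hypothetical runOnNode helper rather than the real SSH runner (assumes fmt and strings are imported):

    // Sketch only: collect the last 400 log lines for every container of one
    // control-plane component, mirroring the docker commands in the log above.
    func gatherComponentLogs(runOnNode func(cmd string) (string, error), component string) ([]string, error) {
        ids, err := runOnNode(fmt.Sprintf("docker ps -a --filter=name=k8s_%s --format={{.ID}}", component))
        if err != nil {
            return nil, err
        }
        var logs []string
        for _, id := range strings.Fields(ids) {
            out, err := runOnNode("docker logs --tail 400 " + id)
            if err != nil {
                return nil, err
            }
            logs = append(logs, out)
        }
        return logs, nil
    }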
	I0722 04:21:04.974405    4749 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:21:04.974450    4749 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:21:09.974647    4749 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:21:09.974687    4749 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:21:13.887434    4522 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:21:14.975058    4749 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:21:14.975162    4749 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:21:18.889886    4522 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:21:18.890175    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0722 04:21:18.912962    4522 logs.go:276] 2 containers: [dffc81da16cb 5045415bfa4b]
	I0722 04:21:18.913085    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0722 04:21:18.929422    4522 logs.go:276] 2 containers: [8f8f38b73c9c 31e229b2e880]
	I0722 04:21:18.929501    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0722 04:21:18.941635    4522 logs.go:276] 1 containers: [35e09cb53f8d]
	I0722 04:21:18.941709    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0722 04:21:18.954299    4522 logs.go:276] 2 containers: [bb2de59a46b2 d2d617658892]
	I0722 04:21:18.954374    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0722 04:21:18.969979    4522 logs.go:276] 1 containers: [92576e20db6b]
	I0722 04:21:18.970044    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0722 04:21:18.980298    4522 logs.go:276] 2 containers: [d407493c2b8e 1bdf989f8c59]
	I0722 04:21:18.980365    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0722 04:21:18.990505    4522 logs.go:276] 0 containers: []
	W0722 04:21:18.990517    4522 logs.go:278] No container was found matching "kindnet"
	I0722 04:21:18.990572    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0722 04:21:19.004850    4522 logs.go:276] 2 containers: [404815c2fffd b0f51bb80a22]
	I0722 04:21:19.004868    4522 logs.go:123] Gathering logs for kubelet ...
	I0722 04:21:19.004874    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0722 04:21:19.039170    4522 logs.go:138] Found kubelet problem: Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: W0722 11:17:46.135858    4280 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:21:19.039268    4522 logs.go:138] Found kubelet problem: Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: E0722 11:17:46.135900    4280 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	I0722 04:21:19.040020    4522 logs.go:123] Gathering logs for describe nodes ...
	I0722 04:21:19.040028    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0722 04:21:19.080278    4522 logs.go:123] Gathering logs for kube-apiserver [5045415bfa4b] ...
	I0722 04:21:19.080290    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5045415bfa4b"
	I0722 04:21:19.095720    4522 logs.go:123] Gathering logs for etcd [8f8f38b73c9c] ...
	I0722 04:21:19.095731    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f8f38b73c9c"
	I0722 04:21:19.109232    4522 logs.go:123] Gathering logs for kube-scheduler [d2d617658892] ...
	I0722 04:21:19.109243    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d2d617658892"
	I0722 04:21:19.120430    4522 logs.go:123] Gathering logs for storage-provisioner [b0f51bb80a22] ...
	I0722 04:21:19.120445    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0f51bb80a22"
	I0722 04:21:19.132643    4522 logs.go:123] Gathering logs for Docker ...
	I0722 04:21:19.132655    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0722 04:21:19.157345    4522 logs.go:123] Gathering logs for container status ...
	I0722 04:21:19.157356    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 04:21:19.169235    4522 logs.go:123] Gathering logs for kube-apiserver [dffc81da16cb] ...
	I0722 04:21:19.169252    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dffc81da16cb"
	I0722 04:21:19.190732    4522 logs.go:123] Gathering logs for kube-controller-manager [1bdf989f8c59] ...
	I0722 04:21:19.190744    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1bdf989f8c59"
	I0722 04:21:19.206472    4522 logs.go:123] Gathering logs for storage-provisioner [404815c2fffd] ...
	I0722 04:21:19.206483    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 404815c2fffd"
	I0722 04:21:19.217680    4522 logs.go:123] Gathering logs for dmesg ...
	I0722 04:21:19.217692    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 04:21:19.221934    4522 logs.go:123] Gathering logs for coredns [35e09cb53f8d] ...
	I0722 04:21:19.221939    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 35e09cb53f8d"
	I0722 04:21:19.233172    4522 logs.go:123] Gathering logs for kube-scheduler [bb2de59a46b2] ...
	I0722 04:21:19.233183    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb2de59a46b2"
	I0722 04:21:19.245370    4522 logs.go:123] Gathering logs for kube-controller-manager [d407493c2b8e] ...
	I0722 04:21:19.245385    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d407493c2b8e"
	I0722 04:21:19.263060    4522 logs.go:123] Gathering logs for etcd [31e229b2e880] ...
	I0722 04:21:19.263071    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 31e229b2e880"
	I0722 04:21:19.277271    4522 logs.go:123] Gathering logs for kube-proxy [92576e20db6b] ...
	I0722 04:21:19.277282    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92576e20db6b"
	I0722 04:21:19.289457    4522 out.go:304] Setting ErrFile to fd 2...
	I0722 04:21:19.289467    4522 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0722 04:21:19.289495    4522 out.go:239] X Problems detected in kubelet:
	W0722 04:21:19.289500    4522 out.go:239]   Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: W0722 11:17:46.135858    4280 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:21:19.289503    4522 out.go:239]   Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: E0722 11:17:46.135900    4280 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	I0722 04:21:19.289509    4522 out.go:304] Setting ErrFile to fd 2...
	I0722 04:21:19.289511    4522 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:21:19.975907    4749 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:21:19.976026    4749 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:21:24.976939    4749 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:21:24.976998    4749 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:21:29.293518    4522 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:21:29.980058    4749 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:21:29.980124    4749 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:21:34.295814    4522 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:21:34.295899    4522 kubeadm.go:597] duration metric: took 4m7.386838125s to restartPrimaryControlPlane
	W0722 04:21:34.295940    4522 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0722 04:21:34.295959    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
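
As the warning above says, once restarting the existing control plane has timed out, the flow falls back to a full `kubeadm reset` followed by a fresh `kubeadm init`, ignoring the preflight checks that a reused VM is expected to trip (existing manifests, populated etcd data directory, bound kubelet port). A minimal sketch of that fallback (assumed; the real ignore list is the longer one shown in the init command below):

    // Sketch only: reset the node, then re-run kubeadm init against the rendered
    // config, ignoring preflight errors caused by the previous installation.
    // runOnNode is a hypothetical stand-in for the SSH command runner.
    func resetAndReinit(runOnNode func(cmd string) error, kubeadmYAML string) error {
        if err := runOnNode("sudo kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"); err != nil {
            return err
        }
        ignore := "DirAvailable--etc-kubernetes-manifests,Port-10250,Swap,NumCPU,Mem" // abbreviated list
        return runOnNode("sudo kubeadm init --config " + kubeadmYAML + " --ignore-preflight-errors=" + ignore)
    }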
	I0722 04:21:35.289024    4522 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 04:21:35.294110    4522 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0722 04:21:35.296981    4522 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 04:21:35.299933    4522 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 04:21:35.299939    4522 kubeadm.go:157] found existing configuration files:
	
	I0722 04:21:35.299964    4522 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50263 /etc/kubernetes/admin.conf
	I0722 04:21:35.302400    4522 kubeadm.go:163] "https://control-plane.minikube.internal:50263" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50263 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 04:21:35.302425    4522 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 04:21:35.305004    4522 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50263 /etc/kubernetes/kubelet.conf
	I0722 04:21:35.307885    4522 kubeadm.go:163] "https://control-plane.minikube.internal:50263" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50263 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 04:21:35.307907    4522 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 04:21:35.310577    4522 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50263 /etc/kubernetes/controller-manager.conf
	I0722 04:21:35.313098    4522 kubeadm.go:163] "https://control-plane.minikube.internal:50263" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50263 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 04:21:35.313117    4522 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 04:21:35.316061    4522 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50263 /etc/kubernetes/scheduler.conf
	I0722 04:21:35.318713    4522 kubeadm.go:163] "https://control-plane.minikube.internal:50263" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50263 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 04:21:35.318735    4522 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0722 04:21:35.321139    4522 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0722 04:21:35.338968    4522 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0722 04:21:35.339020    4522 kubeadm.go:310] [preflight] Running pre-flight checks
	I0722 04:21:35.387507    4522 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0722 04:21:35.387563    4522 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0722 04:21:35.387633    4522 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0722 04:21:35.437994    4522 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0722 04:21:35.441424    4522 out.go:204]   - Generating certificates and keys ...
	I0722 04:21:35.441463    4522 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0722 04:21:35.441496    4522 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0722 04:21:35.441548    4522 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0722 04:21:35.441584    4522 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0722 04:21:35.441620    4522 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0722 04:21:35.441651    4522 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0722 04:21:35.441688    4522 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0722 04:21:35.441721    4522 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0722 04:21:35.441758    4522 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0722 04:21:35.441797    4522 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0722 04:21:35.441817    4522 kubeadm.go:310] [certs] Using the existing "sa" key
	I0722 04:21:35.441843    4522 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0722 04:21:35.620554    4522 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0722 04:21:35.718704    4522 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0722 04:21:35.761862    4522 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0722 04:21:35.842051    4522 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0722 04:21:35.872233    4522 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0722 04:21:35.872567    4522 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0722 04:21:35.872646    4522 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0722 04:21:35.967959    4522 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0722 04:21:34.980864    4749 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:21:34.980898    4749 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:21:35.971118    4522 out.go:204]   - Booting up control plane ...
	I0722 04:21:35.971163    4522 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0722 04:21:35.971203    4522 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0722 04:21:35.971265    4522 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0722 04:21:35.971303    4522 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0722 04:21:35.971502    4522 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0722 04:21:40.973720    4522 kubeadm.go:310] [apiclient] All control plane components are healthy after 5.002844 seconds
	I0722 04:21:40.973946    4522 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0722 04:21:40.982578    4522 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0722 04:21:41.492749    4522 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0722 04:21:41.492870    4522 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-724000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0722 04:21:41.997291    4522 kubeadm.go:310] [bootstrap-token] Using token: 3b2ac4.5cymdjmizcvjhc80
	I0722 04:21:39.982888    4749 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:21:39.982942    4749 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:21:42.000868    4522 out.go:204]   - Configuring RBAC rules ...
	I0722 04:21:42.000951    4522 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0722 04:21:42.002958    4522 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0722 04:21:42.008485    4522 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0722 04:21:42.009510    4522 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0722 04:21:42.010365    4522 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0722 04:21:42.011207    4522 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0722 04:21:42.015694    4522 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0722 04:21:42.182472    4522 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0722 04:21:42.408773    4522 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0722 04:21:42.409775    4522 kubeadm.go:310] 
	I0722 04:21:42.409807    4522 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0722 04:21:42.409819    4522 kubeadm.go:310] 
	I0722 04:21:42.409870    4522 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0722 04:21:42.409873    4522 kubeadm.go:310] 
	I0722 04:21:42.409887    4522 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0722 04:21:42.409918    4522 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0722 04:21:42.409943    4522 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0722 04:21:42.409948    4522 kubeadm.go:310] 
	I0722 04:21:42.410049    4522 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0722 04:21:42.410054    4522 kubeadm.go:310] 
	I0722 04:21:42.410100    4522 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0722 04:21:42.410102    4522 kubeadm.go:310] 
	I0722 04:21:42.410131    4522 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0722 04:21:42.410174    4522 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0722 04:21:42.410228    4522 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0722 04:21:42.410235    4522 kubeadm.go:310] 
	I0722 04:21:42.410274    4522 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0722 04:21:42.410312    4522 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0722 04:21:42.410318    4522 kubeadm.go:310] 
	I0722 04:21:42.410357    4522 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 3b2ac4.5cymdjmizcvjhc80 \
	I0722 04:21:42.410427    4522 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e1f95f96cbafa48be8d9b2581ace651393ef041feb8f94ca3ac47ac6fd85c5e4 \
	I0722 04:21:42.410444    4522 kubeadm.go:310] 	--control-plane 
	I0722 04:21:42.410447    4522 kubeadm.go:310] 
	I0722 04:21:42.410508    4522 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0722 04:21:42.410512    4522 kubeadm.go:310] 
	I0722 04:21:42.410553    4522 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 3b2ac4.5cymdjmizcvjhc80 \
	I0722 04:21:42.410642    4522 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e1f95f96cbafa48be8d9b2581ace651393ef041feb8f94ca3ac47ac6fd85c5e4 
	I0722 04:21:42.410809    4522 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0722 04:21:42.410819    4522 cni.go:84] Creating CNI manager for ""
	I0722 04:21:42.410827    4522 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0722 04:21:42.414563    4522 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0722 04:21:42.421487    4522 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0722 04:21:42.424393    4522 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0722 04:21:42.429080    4522 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0722 04:21:42.429125    4522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 04:21:42.429160    4522 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-724000 minikube.k8s.io/updated_at=2024_07_22T04_21_42_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8e5b1d22910d5d447b525af478862a848159d7b7 minikube.k8s.io/name=running-upgrade-724000 minikube.k8s.io/primary=true
	I0722 04:21:42.474997    4522 kubeadm.go:1113] duration metric: took 45.907167ms to wait for elevateKubeSystemPrivileges
	I0722 04:21:42.475001    4522 ops.go:34] apiserver oom_adj: -16
	I0722 04:21:42.475093    4522 kubeadm.go:394] duration metric: took 4m15.580408833s to StartCluster
	I0722 04:21:42.475109    4522 settings.go:142] acquiring lock: {Name:mk640939e683dda0ffda5b348284f38e73fbc066 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 04:21:42.475205    4522 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19313-1127/kubeconfig
	I0722 04:21:42.475613    4522 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19313-1127/kubeconfig: {Name:mkb5cae8b3f3a2ff5a3e393f1e4daf97762f1a5b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 04:21:42.475828    4522 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0722 04:21:42.475836    4522 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0722 04:21:42.475877    4522 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-724000"
	I0722 04:21:42.475913    4522 config.go:182] Loaded profile config "running-upgrade-724000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0722 04:21:42.475928    4522 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-724000"
	I0722 04:21:42.475941    4522 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-724000"
	I0722 04:21:42.475948    4522 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-724000"
	W0722 04:21:42.475953    4522 addons.go:243] addon storage-provisioner should already be in state true
	I0722 04:21:42.475966    4522 host.go:66] Checking if "running-upgrade-724000" exists ...
	I0722 04:21:42.476196    4522 retry.go:31] will retry after 1.17937289s: connect: dial unix /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/running-upgrade-724000/monitor: connect: connection refused
	I0722 04:21:42.476864    4522 kapi.go:59] client config for running-upgrade-724000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/running-upgrade-724000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/running-upgrade-724000/client.key", CAFile:"/Users/jenkins/minikube-integration/19313-1127/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x102577790), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0722 04:21:42.476978    4522 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-724000"
	W0722 04:21:42.476983    4522 addons.go:243] addon default-storageclass should already be in state true
	I0722 04:21:42.476990    4522 host.go:66] Checking if "running-upgrade-724000" exists ...
	I0722 04:21:42.477509    4522 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0722 04:21:42.477515    4522 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0722 04:21:42.477521    4522 sshutil.go:53] new ssh client: &{IP:localhost Port:50231 SSHKeyPath:/Users/jenkins/minikube-integration/19313-1127/.minikube/machines/running-upgrade-724000/id_rsa Username:docker}
	I0722 04:21:42.479428    4522 out.go:177] * Verifying Kubernetes components...
	I0722 04:21:42.486380    4522 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 04:21:42.579576    4522 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 04:21:42.584442    4522 api_server.go:52] waiting for apiserver process to appear ...
	I0722 04:21:42.584485    4522 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 04:21:42.589146    4522 api_server.go:72] duration metric: took 113.308083ms to wait for apiserver process to appear ...
	I0722 04:21:42.589154    4522 api_server.go:88] waiting for apiserver healthz status ...
	I0722 04:21:42.589161    4522 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:21:42.637036    4522 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0722 04:21:43.662465    4522 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 04:21:43.666465    4522 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0722 04:21:43.666472    4522 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0722 04:21:43.666481    4522 sshutil.go:53] new ssh client: &{IP:localhost Port:50231 SSHKeyPath:/Users/jenkins/minikube-integration/19313-1127/.minikube/machines/running-upgrade-724000/id_rsa Username:docker}
	I0722 04:21:43.706180    4522 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0722 04:21:44.983362    4749 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:21:44.983417    4749 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:21:47.591188    4522 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:21:47.591231    4522 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:21:49.985704    4749 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:21:49.985779    4749 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:21:52.591445    4522 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:21:52.591510    4522 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:21:54.987962    4749 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:21:54.988063    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0722 04:21:55.006929    4749 logs.go:276] 2 containers: [6f7819ffc2dd b242274d2995]
	I0722 04:21:55.007002    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0722 04:21:55.017157    4749 logs.go:276] 2 containers: [c1a3c1bc5e08 cdb2f02c95ca]
	I0722 04:21:55.017215    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0722 04:21:55.028406    4749 logs.go:276] 1 containers: [a11f092c49f3]
	I0722 04:21:55.028468    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0722 04:21:55.038984    4749 logs.go:276] 2 containers: [829d882a5dcf 9673cbf4cea7]
	I0722 04:21:55.039060    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0722 04:21:55.049164    4749 logs.go:276] 1 containers: [1be7d7e3405b]
	I0722 04:21:55.049227    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0722 04:21:55.059826    4749 logs.go:276] 2 containers: [b9a200dc8c73 107f02380e96]
	I0722 04:21:55.059887    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0722 04:21:55.069868    4749 logs.go:276] 0 containers: []
	W0722 04:21:55.069880    4749 logs.go:278] No container was found matching "kindnet"
	I0722 04:21:55.069937    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0722 04:21:55.081751    4749 logs.go:276] 2 containers: [ac2f27131054 3222ecbcbbb5]
	I0722 04:21:55.081773    4749 logs.go:123] Gathering logs for etcd [cdb2f02c95ca] ...
	I0722 04:21:55.081780    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdb2f02c95ca"
	I0722 04:21:55.103153    4749 logs.go:123] Gathering logs for kube-scheduler [9673cbf4cea7] ...
	I0722 04:21:55.103163    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9673cbf4cea7"
	I0722 04:21:55.125929    4749 logs.go:123] Gathering logs for describe nodes ...
	I0722 04:21:55.125946    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0722 04:21:55.214659    4749 logs.go:123] Gathering logs for etcd [c1a3c1bc5e08] ...
	I0722 04:21:55.214673    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1a3c1bc5e08"
	I0722 04:21:55.229255    4749 logs.go:123] Gathering logs for kubelet ...
	I0722 04:21:55.229270    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 04:21:55.270482    4749 logs.go:123] Gathering logs for kube-apiserver [b242274d2995] ...
	I0722 04:21:55.270493    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b242274d2995"
	I0722 04:21:55.303038    4749 logs.go:123] Gathering logs for kube-proxy [1be7d7e3405b] ...
	I0722 04:21:55.303050    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1be7d7e3405b"
	I0722 04:21:55.314917    4749 logs.go:123] Gathering logs for kube-controller-manager [b9a200dc8c73] ...
	I0722 04:21:55.314929    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9a200dc8c73"
	I0722 04:21:55.340510    4749 logs.go:123] Gathering logs for storage-provisioner [ac2f27131054] ...
	I0722 04:21:55.340525    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac2f27131054"
	I0722 04:21:55.352274    4749 logs.go:123] Gathering logs for Docker ...
	I0722 04:21:55.352285    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0722 04:21:55.376977    4749 logs.go:123] Gathering logs for container status ...
	I0722 04:21:55.376991    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 04:21:55.389843    4749 logs.go:123] Gathering logs for coredns [a11f092c49f3] ...
	I0722 04:21:55.389856    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a11f092c49f3"
	I0722 04:21:55.401983    4749 logs.go:123] Gathering logs for kube-scheduler [829d882a5dcf] ...
	I0722 04:21:55.401995    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 829d882a5dcf"
	I0722 04:21:55.414616    4749 logs.go:123] Gathering logs for kube-controller-manager [107f02380e96] ...
	I0722 04:21:55.414627    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 107f02380e96"
	I0722 04:21:55.429679    4749 logs.go:123] Gathering logs for storage-provisioner [3222ecbcbbb5] ...
	I0722 04:21:55.429693    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3222ecbcbbb5"
	I0722 04:21:55.441337    4749 logs.go:123] Gathering logs for dmesg ...
	I0722 04:21:55.441349    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 04:21:55.446184    4749 logs.go:123] Gathering logs for kube-apiserver [6f7819ffc2dd] ...
	I0722 04:21:55.446192    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f7819ffc2dd"
	I0722 04:21:57.591783    4522 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:21:57.591823    4522 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:21:57.964367    4749 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:22:02.592245    4522 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:22:02.592298    4522 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:22:02.966640    4749 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:22:02.967074    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0722 04:22:03.005043    4749 logs.go:276] 2 containers: [6f7819ffc2dd b242274d2995]
	I0722 04:22:03.005182    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0722 04:22:03.031272    4749 logs.go:276] 2 containers: [c1a3c1bc5e08 cdb2f02c95ca]
	I0722 04:22:03.031366    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0722 04:22:03.044441    4749 logs.go:276] 1 containers: [a11f092c49f3]
	I0722 04:22:03.044519    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0722 04:22:03.064669    4749 logs.go:276] 2 containers: [829d882a5dcf 9673cbf4cea7]
	I0722 04:22:03.064742    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0722 04:22:03.075517    4749 logs.go:276] 1 containers: [1be7d7e3405b]
	I0722 04:22:03.075582    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0722 04:22:03.086347    4749 logs.go:276] 2 containers: [b9a200dc8c73 107f02380e96]
	I0722 04:22:03.086418    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0722 04:22:03.105588    4749 logs.go:276] 0 containers: []
	W0722 04:22:03.105599    4749 logs.go:278] No container was found matching "kindnet"
	I0722 04:22:03.105652    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0722 04:22:03.116627    4749 logs.go:276] 2 containers: [ac2f27131054 3222ecbcbbb5]
	I0722 04:22:03.116644    4749 logs.go:123] Gathering logs for kube-scheduler [829d882a5dcf] ...
	I0722 04:22:03.116650    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 829d882a5dcf"
	I0722 04:22:03.130399    4749 logs.go:123] Gathering logs for Docker ...
	I0722 04:22:03.130410    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0722 04:22:03.157338    4749 logs.go:123] Gathering logs for kube-apiserver [6f7819ffc2dd] ...
	I0722 04:22:03.157347    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f7819ffc2dd"
	I0722 04:22:03.171051    4749 logs.go:123] Gathering logs for kube-proxy [1be7d7e3405b] ...
	I0722 04:22:03.171061    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1be7d7e3405b"
	I0722 04:22:03.182924    4749 logs.go:123] Gathering logs for kube-controller-manager [b9a200dc8c73] ...
	I0722 04:22:03.182935    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9a200dc8c73"
	I0722 04:22:03.200186    4749 logs.go:123] Gathering logs for storage-provisioner [ac2f27131054] ...
	I0722 04:22:03.200201    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac2f27131054"
	I0722 04:22:03.212263    4749 logs.go:123] Gathering logs for storage-provisioner [3222ecbcbbb5] ...
	I0722 04:22:03.212273    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3222ecbcbbb5"
	I0722 04:22:03.223805    4749 logs.go:123] Gathering logs for container status ...
	I0722 04:22:03.223817    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 04:22:03.235601    4749 logs.go:123] Gathering logs for dmesg ...
	I0722 04:22:03.235611    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 04:22:03.239976    4749 logs.go:123] Gathering logs for coredns [a11f092c49f3] ...
	I0722 04:22:03.239984    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a11f092c49f3"
	I0722 04:22:03.251397    4749 logs.go:123] Gathering logs for kube-scheduler [9673cbf4cea7] ...
	I0722 04:22:03.251408    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9673cbf4cea7"
	I0722 04:22:03.274138    4749 logs.go:123] Gathering logs for etcd [c1a3c1bc5e08] ...
	I0722 04:22:03.274149    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1a3c1bc5e08"
	I0722 04:22:03.288326    4749 logs.go:123] Gathering logs for etcd [cdb2f02c95ca] ...
	I0722 04:22:03.288337    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdb2f02c95ca"
	I0722 04:22:03.303113    4749 logs.go:123] Gathering logs for kube-controller-manager [107f02380e96] ...
	I0722 04:22:03.303123    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 107f02380e96"
	I0722 04:22:03.317699    4749 logs.go:123] Gathering logs for kubelet ...
	I0722 04:22:03.317734    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 04:22:03.355223    4749 logs.go:123] Gathering logs for describe nodes ...
	I0722 04:22:03.355235    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0722 04:22:03.390283    4749 logs.go:123] Gathering logs for kube-apiserver [b242274d2995] ...
	I0722 04:22:03.390294    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b242274d2995"
	I0722 04:22:05.917777    4749 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:22:07.593164    4522 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:22:07.593218    4522 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:22:10.919965    4749 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:22:10.920155    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0722 04:22:10.938805    4749 logs.go:276] 2 containers: [6f7819ffc2dd b242274d2995]
	I0722 04:22:10.938896    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0722 04:22:10.953198    4749 logs.go:276] 2 containers: [c1a3c1bc5e08 cdb2f02c95ca]
	I0722 04:22:10.953270    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0722 04:22:10.967837    4749 logs.go:276] 1 containers: [a11f092c49f3]
	I0722 04:22:10.967905    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0722 04:22:10.978787    4749 logs.go:276] 2 containers: [829d882a5dcf 9673cbf4cea7]
	I0722 04:22:10.978862    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0722 04:22:10.993254    4749 logs.go:276] 1 containers: [1be7d7e3405b]
	I0722 04:22:10.993320    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0722 04:22:11.004891    4749 logs.go:276] 2 containers: [b9a200dc8c73 107f02380e96]
	I0722 04:22:11.004963    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0722 04:22:11.019929    4749 logs.go:276] 0 containers: []
	W0722 04:22:11.019941    4749 logs.go:278] No container was found matching "kindnet"
	I0722 04:22:11.020003    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0722 04:22:11.035203    4749 logs.go:276] 2 containers: [ac2f27131054 3222ecbcbbb5]
	I0722 04:22:11.035221    4749 logs.go:123] Gathering logs for kube-controller-manager [b9a200dc8c73] ...
	I0722 04:22:11.035227    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9a200dc8c73"
	I0722 04:22:11.053080    4749 logs.go:123] Gathering logs for Docker ...
	I0722 04:22:11.053091    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0722 04:22:11.077066    4749 logs.go:123] Gathering logs for kubelet ...
	I0722 04:22:11.077074    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 04:22:11.115576    4749 logs.go:123] Gathering logs for kube-apiserver [b242274d2995] ...
	I0722 04:22:11.115587    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b242274d2995"
	I0722 04:22:11.141002    4749 logs.go:123] Gathering logs for kube-scheduler [829d882a5dcf] ...
	I0722 04:22:11.141012    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 829d882a5dcf"
	I0722 04:22:11.154102    4749 logs.go:123] Gathering logs for dmesg ...
	I0722 04:22:11.154116    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 04:22:11.158378    4749 logs.go:123] Gathering logs for etcd [c1a3c1bc5e08] ...
	I0722 04:22:11.158385    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1a3c1bc5e08"
	I0722 04:22:11.172979    4749 logs.go:123] Gathering logs for kube-proxy [1be7d7e3405b] ...
	I0722 04:22:11.172990    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1be7d7e3405b"
	I0722 04:22:11.184748    4749 logs.go:123] Gathering logs for kube-controller-manager [107f02380e96] ...
	I0722 04:22:11.184758    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 107f02380e96"
	I0722 04:22:11.198579    4749 logs.go:123] Gathering logs for storage-provisioner [3222ecbcbbb5] ...
	I0722 04:22:11.198590    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3222ecbcbbb5"
	I0722 04:22:11.209981    4749 logs.go:123] Gathering logs for describe nodes ...
	I0722 04:22:11.209997    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0722 04:22:11.256548    4749 logs.go:123] Gathering logs for kube-apiserver [6f7819ffc2dd] ...
	I0722 04:22:11.256559    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f7819ffc2dd"
	I0722 04:22:11.270332    4749 logs.go:123] Gathering logs for kube-scheduler [9673cbf4cea7] ...
	I0722 04:22:11.270341    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9673cbf4cea7"
	I0722 04:22:11.293213    4749 logs.go:123] Gathering logs for storage-provisioner [ac2f27131054] ...
	I0722 04:22:11.293229    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac2f27131054"
	I0722 04:22:11.304677    4749 logs.go:123] Gathering logs for container status ...
	I0722 04:22:11.304691    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 04:22:11.316937    4749 logs.go:123] Gathering logs for etcd [cdb2f02c95ca] ...
	I0722 04:22:11.316951    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdb2f02c95ca"
	I0722 04:22:11.331682    4749 logs.go:123] Gathering logs for coredns [a11f092c49f3] ...
	I0722 04:22:11.331698    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a11f092c49f3"
	I0722 04:22:12.594014    4522 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:22:12.594060    4522 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0722 04:22:12.951679    4522 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0722 04:22:12.955358    4522 out.go:177] * Enabled addons: storage-provisioner
	I0722 04:22:12.962262    4522 addons.go:510] duration metric: took 30.48692575s for enable addons: enabled=[storage-provisioner]
	I0722 04:22:13.844791    4749 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:22:17.595222    4522 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:22:17.595268    4522 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:22:18.847086    4749 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:22:18.847242    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0722 04:22:18.859723    4749 logs.go:276] 2 containers: [6f7819ffc2dd b242274d2995]
	I0722 04:22:18.859810    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0722 04:22:18.871433    4749 logs.go:276] 2 containers: [c1a3c1bc5e08 cdb2f02c95ca]
	I0722 04:22:18.871512    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0722 04:22:18.881592    4749 logs.go:276] 1 containers: [a11f092c49f3]
	I0722 04:22:18.881666    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0722 04:22:18.892057    4749 logs.go:276] 2 containers: [829d882a5dcf 9673cbf4cea7]
	I0722 04:22:18.892130    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0722 04:22:18.904756    4749 logs.go:276] 1 containers: [1be7d7e3405b]
	I0722 04:22:18.904826    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0722 04:22:18.915072    4749 logs.go:276] 2 containers: [b9a200dc8c73 107f02380e96]
	I0722 04:22:18.915139    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0722 04:22:18.925450    4749 logs.go:276] 0 containers: []
	W0722 04:22:18.925461    4749 logs.go:278] No container was found matching "kindnet"
	I0722 04:22:18.925519    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0722 04:22:18.935473    4749 logs.go:276] 2 containers: [ac2f27131054 3222ecbcbbb5]
	I0722 04:22:18.935491    4749 logs.go:123] Gathering logs for dmesg ...
	I0722 04:22:18.935497    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 04:22:18.939707    4749 logs.go:123] Gathering logs for etcd [cdb2f02c95ca] ...
	I0722 04:22:18.939716    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdb2f02c95ca"
	I0722 04:22:18.954161    4749 logs.go:123] Gathering logs for kube-scheduler [829d882a5dcf] ...
	I0722 04:22:18.954172    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 829d882a5dcf"
	I0722 04:22:18.967221    4749 logs.go:123] Gathering logs for kube-proxy [1be7d7e3405b] ...
	I0722 04:22:18.967232    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1be7d7e3405b"
	I0722 04:22:18.979597    4749 logs.go:123] Gathering logs for kube-controller-manager [107f02380e96] ...
	I0722 04:22:18.979608    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 107f02380e96"
	I0722 04:22:18.993805    4749 logs.go:123] Gathering logs for storage-provisioner [ac2f27131054] ...
	I0722 04:22:18.993815    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac2f27131054"
	I0722 04:22:19.005087    4749 logs.go:123] Gathering logs for kube-apiserver [b242274d2995] ...
	I0722 04:22:19.005097    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b242274d2995"
	I0722 04:22:19.029995    4749 logs.go:123] Gathering logs for etcd [c1a3c1bc5e08] ...
	I0722 04:22:19.030006    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1a3c1bc5e08"
	I0722 04:22:19.044088    4749 logs.go:123] Gathering logs for kube-scheduler [9673cbf4cea7] ...
	I0722 04:22:19.044097    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9673cbf4cea7"
	I0722 04:22:19.065508    4749 logs.go:123] Gathering logs for container status ...
	I0722 04:22:19.065519    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 04:22:19.078029    4749 logs.go:123] Gathering logs for coredns [a11f092c49f3] ...
	I0722 04:22:19.078040    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a11f092c49f3"
	I0722 04:22:19.092611    4749 logs.go:123] Gathering logs for kube-controller-manager [b9a200dc8c73] ...
	I0722 04:22:19.092624    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9a200dc8c73"
	I0722 04:22:19.109311    4749 logs.go:123] Gathering logs for storage-provisioner [3222ecbcbbb5] ...
	I0722 04:22:19.109324    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3222ecbcbbb5"
	I0722 04:22:19.120151    4749 logs.go:123] Gathering logs for Docker ...
	I0722 04:22:19.120162    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0722 04:22:19.145021    4749 logs.go:123] Gathering logs for kubelet ...
	I0722 04:22:19.145029    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 04:22:19.184590    4749 logs.go:123] Gathering logs for describe nodes ...
	I0722 04:22:19.184601    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0722 04:22:19.219802    4749 logs.go:123] Gathering logs for kube-apiserver [6f7819ffc2dd] ...
	I0722 04:22:19.219816    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f7819ffc2dd"
	I0722 04:22:21.736488    4749 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:22:22.596792    4522 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:22:22.596822    4522 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:22:26.738696    4749 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:22:26.738907    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0722 04:22:26.753605    4749 logs.go:276] 2 containers: [6f7819ffc2dd b242274d2995]
	I0722 04:22:26.753675    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0722 04:22:26.768180    4749 logs.go:276] 2 containers: [c1a3c1bc5e08 cdb2f02c95ca]
	I0722 04:22:26.768254    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0722 04:22:26.779317    4749 logs.go:276] 1 containers: [a11f092c49f3]
	I0722 04:22:26.779383    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0722 04:22:26.793586    4749 logs.go:276] 2 containers: [829d882a5dcf 9673cbf4cea7]
	I0722 04:22:26.793651    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0722 04:22:26.810845    4749 logs.go:276] 1 containers: [1be7d7e3405b]
	I0722 04:22:26.810963    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0722 04:22:26.823398    4749 logs.go:276] 2 containers: [b9a200dc8c73 107f02380e96]
	I0722 04:22:26.823460    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0722 04:22:26.833600    4749 logs.go:276] 0 containers: []
	W0722 04:22:26.833613    4749 logs.go:278] No container was found matching "kindnet"
	I0722 04:22:26.833665    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0722 04:22:26.844223    4749 logs.go:276] 2 containers: [ac2f27131054 3222ecbcbbb5]
	I0722 04:22:26.844239    4749 logs.go:123] Gathering logs for Docker ...
	I0722 04:22:26.844245    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0722 04:22:26.870155    4749 logs.go:123] Gathering logs for container status ...
	I0722 04:22:26.870163    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 04:22:26.881730    4749 logs.go:123] Gathering logs for describe nodes ...
	I0722 04:22:26.881746    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0722 04:22:26.917317    4749 logs.go:123] Gathering logs for kube-apiserver [6f7819ffc2dd] ...
	I0722 04:22:26.917332    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f7819ffc2dd"
	I0722 04:22:26.931863    4749 logs.go:123] Gathering logs for coredns [a11f092c49f3] ...
	I0722 04:22:26.931872    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a11f092c49f3"
	I0722 04:22:26.944011    4749 logs.go:123] Gathering logs for kube-scheduler [9673cbf4cea7] ...
	I0722 04:22:26.944022    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9673cbf4cea7"
	I0722 04:22:26.965651    4749 logs.go:123] Gathering logs for kube-controller-manager [b9a200dc8c73] ...
	I0722 04:22:26.965663    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9a200dc8c73"
	I0722 04:22:26.983587    4749 logs.go:123] Gathering logs for storage-provisioner [3222ecbcbbb5] ...
	I0722 04:22:26.983596    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3222ecbcbbb5"
	I0722 04:22:26.995400    4749 logs.go:123] Gathering logs for dmesg ...
	I0722 04:22:26.995409    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 04:22:26.999744    4749 logs.go:123] Gathering logs for etcd [c1a3c1bc5e08] ...
	I0722 04:22:26.999751    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1a3c1bc5e08"
	I0722 04:22:27.013184    4749 logs.go:123] Gathering logs for storage-provisioner [ac2f27131054] ...
	I0722 04:22:27.013193    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac2f27131054"
	I0722 04:22:27.024760    4749 logs.go:123] Gathering logs for kubelet ...
	I0722 04:22:27.024771    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 04:22:27.063082    4749 logs.go:123] Gathering logs for kube-apiserver [b242274d2995] ...
	I0722 04:22:27.063090    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b242274d2995"
	I0722 04:22:27.091397    4749 logs.go:123] Gathering logs for etcd [cdb2f02c95ca] ...
	I0722 04:22:27.091405    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdb2f02c95ca"
	I0722 04:22:27.105133    4749 logs.go:123] Gathering logs for kube-scheduler [829d882a5dcf] ...
	I0722 04:22:27.105147    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 829d882a5dcf"
	I0722 04:22:27.118000    4749 logs.go:123] Gathering logs for kube-proxy [1be7d7e3405b] ...
	I0722 04:22:27.118010    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1be7d7e3405b"
	I0722 04:22:27.130215    4749 logs.go:123] Gathering logs for kube-controller-manager [107f02380e96] ...
	I0722 04:22:27.130229    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 107f02380e96"
	I0722 04:22:27.598580    4522 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:22:27.598614    4522 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:22:29.646505    4749 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:22:32.600685    4522 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:22:32.600708    4522 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:22:34.648715    4749 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:22:34.648826    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0722 04:22:34.659877    4749 logs.go:276] 2 containers: [6f7819ffc2dd b242274d2995]
	I0722 04:22:34.659953    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0722 04:22:34.670336    4749 logs.go:276] 2 containers: [c1a3c1bc5e08 cdb2f02c95ca]
	I0722 04:22:34.670403    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0722 04:22:34.680886    4749 logs.go:276] 1 containers: [a11f092c49f3]
	I0722 04:22:34.680957    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0722 04:22:34.692216    4749 logs.go:276] 2 containers: [829d882a5dcf 9673cbf4cea7]
	I0722 04:22:34.692284    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0722 04:22:34.702737    4749 logs.go:276] 1 containers: [1be7d7e3405b]
	I0722 04:22:34.702813    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0722 04:22:34.713077    4749 logs.go:276] 2 containers: [b9a200dc8c73 107f02380e96]
	I0722 04:22:34.713140    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0722 04:22:34.729328    4749 logs.go:276] 0 containers: []
	W0722 04:22:34.729343    4749 logs.go:278] No container was found matching "kindnet"
	I0722 04:22:34.729401    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0722 04:22:34.740508    4749 logs.go:276] 2 containers: [ac2f27131054 3222ecbcbbb5]
	I0722 04:22:34.740527    4749 logs.go:123] Gathering logs for coredns [a11f092c49f3] ...
	I0722 04:22:34.740533    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a11f092c49f3"
	I0722 04:22:34.751859    4749 logs.go:123] Gathering logs for kube-scheduler [829d882a5dcf] ...
	I0722 04:22:34.751872    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 829d882a5dcf"
	I0722 04:22:34.764859    4749 logs.go:123] Gathering logs for Docker ...
	I0722 04:22:34.764869    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0722 04:22:34.790770    4749 logs.go:123] Gathering logs for container status ...
	I0722 04:22:34.790778    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 04:22:34.802908    4749 logs.go:123] Gathering logs for kube-apiserver [6f7819ffc2dd] ...
	I0722 04:22:34.802917    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f7819ffc2dd"
	I0722 04:22:34.816997    4749 logs.go:123] Gathering logs for dmesg ...
	I0722 04:22:34.817008    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 04:22:34.821147    4749 logs.go:123] Gathering logs for describe nodes ...
	I0722 04:22:34.821153    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0722 04:22:34.854562    4749 logs.go:123] Gathering logs for kube-proxy [1be7d7e3405b] ...
	I0722 04:22:34.854573    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1be7d7e3405b"
	I0722 04:22:34.866201    4749 logs.go:123] Gathering logs for kubelet ...
	I0722 04:22:34.866212    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 04:22:34.905008    4749 logs.go:123] Gathering logs for etcd [cdb2f02c95ca] ...
	I0722 04:22:34.905021    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdb2f02c95ca"
	I0722 04:22:34.919751    4749 logs.go:123] Gathering logs for kube-scheduler [9673cbf4cea7] ...
	I0722 04:22:34.919762    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9673cbf4cea7"
	I0722 04:22:34.941623    4749 logs.go:123] Gathering logs for storage-provisioner [3222ecbcbbb5] ...
	I0722 04:22:34.941635    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3222ecbcbbb5"
	I0722 04:22:34.952646    4749 logs.go:123] Gathering logs for kube-apiserver [b242274d2995] ...
	I0722 04:22:34.952657    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b242274d2995"
	I0722 04:22:34.977814    4749 logs.go:123] Gathering logs for kube-controller-manager [b9a200dc8c73] ...
	I0722 04:22:34.977829    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9a200dc8c73"
	I0722 04:22:35.000342    4749 logs.go:123] Gathering logs for kube-controller-manager [107f02380e96] ...
	I0722 04:22:35.000354    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 107f02380e96"
	I0722 04:22:35.014594    4749 logs.go:123] Gathering logs for storage-provisioner [ac2f27131054] ...
	I0722 04:22:35.014603    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac2f27131054"
	I0722 04:22:35.025975    4749 logs.go:123] Gathering logs for etcd [c1a3c1bc5e08] ...
	I0722 04:22:35.025988    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1a3c1bc5e08"
	I0722 04:22:37.602831    4522 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:22:37.602865    4522 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:22:37.541822    4749 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:22:42.605029    4522 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:22:42.605127    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0722 04:22:42.623016    4522 logs.go:276] 1 containers: [ff0a72834be9]
	I0722 04:22:42.623072    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0722 04:22:42.635319    4522 logs.go:276] 1 containers: [a443754c5936]
	I0722 04:22:42.635404    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0722 04:22:42.647534    4522 logs.go:276] 2 containers: [cc88e2e59cc9 f695590f14ba]
	I0722 04:22:42.647609    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0722 04:22:42.659342    4522 logs.go:276] 1 containers: [19fea8cb2f86]
	I0722 04:22:42.659421    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0722 04:22:42.670597    4522 logs.go:276] 1 containers: [812f238bbb81]
	I0722 04:22:42.670665    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0722 04:22:42.682022    4522 logs.go:276] 1 containers: [e86dcf4cf2ad]
	I0722 04:22:42.682101    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0722 04:22:42.692446    4522 logs.go:276] 0 containers: []
	W0722 04:22:42.692457    4522 logs.go:278] No container was found matching "kindnet"
	I0722 04:22:42.692518    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0722 04:22:42.703893    4522 logs.go:276] 1 containers: [4b4fab967404]
	I0722 04:22:42.703911    4522 logs.go:123] Gathering logs for kube-proxy [812f238bbb81] ...
	I0722 04:22:42.703917    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 812f238bbb81"
	I0722 04:22:42.716458    4522 logs.go:123] Gathering logs for Docker ...
	I0722 04:22:42.716470    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0722 04:22:42.742740    4522 logs.go:123] Gathering logs for container status ...
	I0722 04:22:42.742757    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 04:22:42.758219    4522 logs.go:123] Gathering logs for dmesg ...
	I0722 04:22:42.758231    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 04:22:42.763223    4522 logs.go:123] Gathering logs for kube-scheduler [19fea8cb2f86] ...
	I0722 04:22:42.763231    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19fea8cb2f86"
	I0722 04:22:42.779713    4522 logs.go:123] Gathering logs for kube-apiserver [ff0a72834be9] ...
	I0722 04:22:42.779723    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff0a72834be9"
	I0722 04:22:42.794972    4522 logs.go:123] Gathering logs for etcd [a443754c5936] ...
	I0722 04:22:42.794983    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a443754c5936"
	I0722 04:22:42.809813    4522 logs.go:123] Gathering logs for coredns [cc88e2e59cc9] ...
	I0722 04:22:42.809826    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc88e2e59cc9"
	I0722 04:22:42.823133    4522 logs.go:123] Gathering logs for coredns [f695590f14ba] ...
	I0722 04:22:42.823145    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f695590f14ba"
	I0722 04:22:42.835491    4522 logs.go:123] Gathering logs for kube-controller-manager [e86dcf4cf2ad] ...
	I0722 04:22:42.835503    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e86dcf4cf2ad"
	I0722 04:22:42.853921    4522 logs.go:123] Gathering logs for storage-provisioner [4b4fab967404] ...
	I0722 04:22:42.853933    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b4fab967404"
	I0722 04:22:42.867462    4522 logs.go:123] Gathering logs for kubelet ...
	I0722 04:22:42.867473    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0722 04:22:42.886667    4522 logs.go:138] Found kubelet problem: Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: W0722 11:17:46.135858    4280 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:22:42.886761    4522 logs.go:138] Found kubelet problem: Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: E0722 11:17:46.135900    4280 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:22:42.902945    4522 logs.go:138] Found kubelet problem: Jul 22 11:21:55 running-upgrade-724000 kubelet[11305]: W0722 11:21:55.461534   11305 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:22:42.903040    4522 logs.go:138] Found kubelet problem: Jul 22 11:21:55 running-upgrade-724000 kubelet[11305]: E0722 11:21:55.461602   11305 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	I0722 04:22:42.904255    4522 logs.go:123] Gathering logs for describe nodes ...
	I0722 04:22:42.904263    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0722 04:22:42.941922    4522 out.go:304] Setting ErrFile to fd 2...
	I0722 04:22:42.941933    4522 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0722 04:22:42.941960    4522 out.go:239] X Problems detected in kubelet:
	W0722 04:22:42.941966    4522 out.go:239]   Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: W0722 11:17:46.135858    4280 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:22:42.941979    4522 out.go:239]   Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: E0722 11:17:46.135900    4280 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:22:42.941983    4522 out.go:239]   Jul 22 11:21:55 running-upgrade-724000 kubelet[11305]: W0722 11:21:55.461534   11305 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:22:42.941997    4522 out.go:239]   Jul 22 11:21:55 running-upgrade-724000 kubelet[11305]: E0722 11:21:55.461602   11305 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	I0722 04:22:42.942001    4522 out.go:304] Setting ErrFile to fd 2...
	I0722 04:22:42.942003    4522 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:22:42.544118    4749 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:22:42.544467    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0722 04:22:42.574531    4749 logs.go:276] 2 containers: [6f7819ffc2dd b242274d2995]
	I0722 04:22:42.574617    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0722 04:22:42.585111    4749 logs.go:276] 2 containers: [c1a3c1bc5e08 cdb2f02c95ca]
	I0722 04:22:42.585171    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0722 04:22:42.596463    4749 logs.go:276] 1 containers: [a11f092c49f3]
	I0722 04:22:42.596534    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0722 04:22:42.609427    4749 logs.go:276] 2 containers: [829d882a5dcf 9673cbf4cea7]
	I0722 04:22:42.609505    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0722 04:22:42.621155    4749 logs.go:276] 1 containers: [1be7d7e3405b]
	I0722 04:22:42.621227    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0722 04:22:42.641440    4749 logs.go:276] 2 containers: [b9a200dc8c73 107f02380e96]
	I0722 04:22:42.641515    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0722 04:22:42.652161    4749 logs.go:276] 0 containers: []
	W0722 04:22:42.652174    4749 logs.go:278] No container was found matching "kindnet"
	I0722 04:22:42.652233    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0722 04:22:42.663886    4749 logs.go:276] 2 containers: [ac2f27131054 3222ecbcbbb5]
	I0722 04:22:42.663908    4749 logs.go:123] Gathering logs for etcd [cdb2f02c95ca] ...
	I0722 04:22:42.663914    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdb2f02c95ca"
	I0722 04:22:42.682234    4749 logs.go:123] Gathering logs for kube-controller-manager [107f02380e96] ...
	I0722 04:22:42.682243    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 107f02380e96"
	I0722 04:22:42.701188    4749 logs.go:123] Gathering logs for dmesg ...
	I0722 04:22:42.701201    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 04:22:42.705751    4749 logs.go:123] Gathering logs for describe nodes ...
	I0722 04:22:42.705765    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0722 04:22:42.744667    4749 logs.go:123] Gathering logs for kube-controller-manager [b9a200dc8c73] ...
	I0722 04:22:42.744676    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9a200dc8c73"
	I0722 04:22:42.763226    4749 logs.go:123] Gathering logs for storage-provisioner [ac2f27131054] ...
	I0722 04:22:42.763233    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac2f27131054"
	I0722 04:22:42.775770    4749 logs.go:123] Gathering logs for storage-provisioner [3222ecbcbbb5] ...
	I0722 04:22:42.775781    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3222ecbcbbb5"
	I0722 04:22:42.789366    4749 logs.go:123] Gathering logs for container status ...
	I0722 04:22:42.789379    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 04:22:42.803187    4749 logs.go:123] Gathering logs for kubelet ...
	I0722 04:22:42.803200    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 04:22:42.844992    4749 logs.go:123] Gathering logs for etcd [c1a3c1bc5e08] ...
	I0722 04:22:42.845015    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1a3c1bc5e08"
	I0722 04:22:42.861031    4749 logs.go:123] Gathering logs for kube-scheduler [9673cbf4cea7] ...
	I0722 04:22:42.861044    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9673cbf4cea7"
	I0722 04:22:42.884806    4749 logs.go:123] Gathering logs for kube-proxy [1be7d7e3405b] ...
	I0722 04:22:42.884818    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1be7d7e3405b"
	I0722 04:22:42.901499    4749 logs.go:123] Gathering logs for Docker ...
	I0722 04:22:42.901509    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0722 04:22:42.926097    4749 logs.go:123] Gathering logs for kube-apiserver [6f7819ffc2dd] ...
	I0722 04:22:42.926110    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f7819ffc2dd"
	I0722 04:22:42.940565    4749 logs.go:123] Gathering logs for kube-apiserver [b242274d2995] ...
	I0722 04:22:42.940576    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b242274d2995"
	I0722 04:22:42.966709    4749 logs.go:123] Gathering logs for coredns [a11f092c49f3] ...
	I0722 04:22:42.966721    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a11f092c49f3"
	I0722 04:22:42.977765    4749 logs.go:123] Gathering logs for kube-scheduler [829d882a5dcf] ...
	I0722 04:22:42.977777    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 829d882a5dcf"
	I0722 04:22:45.491198    4749 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:22:50.493721    4749 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:22:50.493931    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0722 04:22:50.511841    4749 logs.go:276] 2 containers: [6f7819ffc2dd b242274d2995]
	I0722 04:22:50.511930    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0722 04:22:50.525206    4749 logs.go:276] 2 containers: [c1a3c1bc5e08 cdb2f02c95ca]
	I0722 04:22:50.525276    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0722 04:22:50.537557    4749 logs.go:276] 1 containers: [a11f092c49f3]
	I0722 04:22:50.537626    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0722 04:22:50.548019    4749 logs.go:276] 2 containers: [829d882a5dcf 9673cbf4cea7]
	I0722 04:22:50.548091    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0722 04:22:50.558870    4749 logs.go:276] 1 containers: [1be7d7e3405b]
	I0722 04:22:50.558938    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0722 04:22:50.569315    4749 logs.go:276] 2 containers: [b9a200dc8c73 107f02380e96]
	I0722 04:22:50.569385    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0722 04:22:50.584148    4749 logs.go:276] 0 containers: []
	W0722 04:22:50.584164    4749 logs.go:278] No container was found matching "kindnet"
	I0722 04:22:50.584223    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0722 04:22:50.594845    4749 logs.go:276] 2 containers: [ac2f27131054 3222ecbcbbb5]
	I0722 04:22:50.594864    4749 logs.go:123] Gathering logs for kube-scheduler [829d882a5dcf] ...
	I0722 04:22:50.594870    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 829d882a5dcf"
	I0722 04:22:50.607633    4749 logs.go:123] Gathering logs for storage-provisioner [ac2f27131054] ...
	I0722 04:22:50.607646    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac2f27131054"
	I0722 04:22:50.622922    4749 logs.go:123] Gathering logs for kubelet ...
	I0722 04:22:50.622934    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 04:22:50.661285    4749 logs.go:123] Gathering logs for etcd [c1a3c1bc5e08] ...
	I0722 04:22:50.661296    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1a3c1bc5e08"
	I0722 04:22:50.680552    4749 logs.go:123] Gathering logs for etcd [cdb2f02c95ca] ...
	I0722 04:22:50.680566    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdb2f02c95ca"
	I0722 04:22:50.695171    4749 logs.go:123] Gathering logs for coredns [a11f092c49f3] ...
	I0722 04:22:50.695186    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a11f092c49f3"
	I0722 04:22:50.706586    4749 logs.go:123] Gathering logs for kube-controller-manager [107f02380e96] ...
	I0722 04:22:50.706598    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 107f02380e96"
	I0722 04:22:50.720387    4749 logs.go:123] Gathering logs for Docker ...
	I0722 04:22:50.720398    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0722 04:22:50.745340    4749 logs.go:123] Gathering logs for kube-controller-manager [b9a200dc8c73] ...
	I0722 04:22:50.745352    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9a200dc8c73"
	I0722 04:22:50.766673    4749 logs.go:123] Gathering logs for storage-provisioner [3222ecbcbbb5] ...
	I0722 04:22:50.766684    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3222ecbcbbb5"
	I0722 04:22:50.779375    4749 logs.go:123] Gathering logs for container status ...
	I0722 04:22:50.779389    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 04:22:50.790658    4749 logs.go:123] Gathering logs for kube-apiserver [6f7819ffc2dd] ...
	I0722 04:22:50.790672    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f7819ffc2dd"
	I0722 04:22:50.808898    4749 logs.go:123] Gathering logs for kube-apiserver [b242274d2995] ...
	I0722 04:22:50.808912    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b242274d2995"
	I0722 04:22:50.833539    4749 logs.go:123] Gathering logs for kube-proxy [1be7d7e3405b] ...
	I0722 04:22:50.833552    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1be7d7e3405b"
	I0722 04:22:50.844961    4749 logs.go:123] Gathering logs for dmesg ...
	I0722 04:22:50.844977    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 04:22:50.848985    4749 logs.go:123] Gathering logs for describe nodes ...
	I0722 04:22:50.848991    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0722 04:22:50.883488    4749 logs.go:123] Gathering logs for kube-scheduler [9673cbf4cea7] ...
	I0722 04:22:50.883503    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9673cbf4cea7"
	I0722 04:22:52.945726    4522 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:22:53.406614    4749 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:22:57.947911    4522 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:22:57.948004    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0722 04:22:57.959418    4522 logs.go:276] 1 containers: [ff0a72834be9]
	I0722 04:22:57.959484    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0722 04:22:57.970685    4522 logs.go:276] 1 containers: [a443754c5936]
	I0722 04:22:57.970761    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0722 04:22:57.981307    4522 logs.go:276] 2 containers: [cc88e2e59cc9 f695590f14ba]
	I0722 04:22:57.981379    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0722 04:22:57.991739    4522 logs.go:276] 1 containers: [19fea8cb2f86]
	I0722 04:22:57.991804    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0722 04:22:58.002344    4522 logs.go:276] 1 containers: [812f238bbb81]
	I0722 04:22:58.002418    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0722 04:22:58.012506    4522 logs.go:276] 1 containers: [e86dcf4cf2ad]
	I0722 04:22:58.012583    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0722 04:22:58.022497    4522 logs.go:276] 0 containers: []
	W0722 04:22:58.022508    4522 logs.go:278] No container was found matching "kindnet"
	I0722 04:22:58.022565    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0722 04:22:58.032188    4522 logs.go:276] 1 containers: [4b4fab967404]
	I0722 04:22:58.032201    4522 logs.go:123] Gathering logs for etcd [a443754c5936] ...
	I0722 04:22:58.032206    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a443754c5936"
	I0722 04:22:58.046296    4522 logs.go:123] Gathering logs for coredns [f695590f14ba] ...
	I0722 04:22:58.046311    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f695590f14ba"
	I0722 04:22:58.058317    4522 logs.go:123] Gathering logs for kube-controller-manager [e86dcf4cf2ad] ...
	I0722 04:22:58.058330    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e86dcf4cf2ad"
	I0722 04:22:58.076796    4522 logs.go:123] Gathering logs for storage-provisioner [4b4fab967404] ...
	I0722 04:22:58.076809    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b4fab967404"
	I0722 04:22:58.087903    4522 logs.go:123] Gathering logs for Docker ...
	I0722 04:22:58.087916    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0722 04:22:58.113052    4522 logs.go:123] Gathering logs for kubelet ...
	I0722 04:22:58.113060    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0722 04:22:58.130875    4522 logs.go:138] Found kubelet problem: Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: W0722 11:17:46.135858    4280 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:22:58.130969    4522 logs.go:138] Found kubelet problem: Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: E0722 11:17:46.135900    4280 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:22:58.146669    4522 logs.go:138] Found kubelet problem: Jul 22 11:21:55 running-upgrade-724000 kubelet[11305]: W0722 11:21:55.461534   11305 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:22:58.146761    4522 logs.go:138] Found kubelet problem: Jul 22 11:21:55 running-upgrade-724000 kubelet[11305]: E0722 11:21:55.461602   11305 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	I0722 04:22:58.147937    4522 logs.go:123] Gathering logs for describe nodes ...
	I0722 04:22:58.147941    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0722 04:22:58.183918    4522 logs.go:123] Gathering logs for kube-apiserver [ff0a72834be9] ...
	I0722 04:22:58.183931    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff0a72834be9"
	I0722 04:22:58.202283    4522 logs.go:123] Gathering logs for coredns [cc88e2e59cc9] ...
	I0722 04:22:58.202294    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc88e2e59cc9"
	I0722 04:22:58.214076    4522 logs.go:123] Gathering logs for kube-scheduler [19fea8cb2f86] ...
	I0722 04:22:58.214087    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19fea8cb2f86"
	I0722 04:22:58.229242    4522 logs.go:123] Gathering logs for kube-proxy [812f238bbb81] ...
	I0722 04:22:58.229256    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 812f238bbb81"
	I0722 04:22:58.244129    4522 logs.go:123] Gathering logs for container status ...
	I0722 04:22:58.244143    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 04:22:58.255737    4522 logs.go:123] Gathering logs for dmesg ...
	I0722 04:22:58.255751    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 04:22:58.260483    4522 out.go:304] Setting ErrFile to fd 2...
	I0722 04:22:58.260492    4522 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0722 04:22:58.260516    4522 out.go:239] X Problems detected in kubelet:
	W0722 04:22:58.260521    4522 out.go:239]   Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: W0722 11:17:46.135858    4280 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:22:58.260524    4522 out.go:239]   Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: E0722 11:17:46.135900    4280 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:22:58.260529    4522 out.go:239]   Jul 22 11:21:55 running-upgrade-724000 kubelet[11305]: W0722 11:21:55.461534   11305 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:22:58.260533    4522 out.go:239]   Jul 22 11:21:55 running-upgrade-724000 kubelet[11305]: E0722 11:21:55.461602   11305 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	I0722 04:22:58.260547    4522 out.go:304] Setting ErrFile to fd 2...
	I0722 04:22:58.260550    4522 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:22:58.409138    4749 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:22:58.409317    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0722 04:22:58.427165    4749 logs.go:276] 2 containers: [6f7819ffc2dd b242274d2995]
	I0722 04:22:58.427248    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0722 04:22:58.440194    4749 logs.go:276] 2 containers: [c1a3c1bc5e08 cdb2f02c95ca]
	I0722 04:22:58.440261    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0722 04:22:58.454766    4749 logs.go:276] 1 containers: [a11f092c49f3]
	I0722 04:22:58.454836    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0722 04:22:58.465533    4749 logs.go:276] 2 containers: [829d882a5dcf 9673cbf4cea7]
	I0722 04:22:58.465603    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0722 04:22:58.476376    4749 logs.go:276] 1 containers: [1be7d7e3405b]
	I0722 04:22:58.476444    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0722 04:22:58.487331    4749 logs.go:276] 2 containers: [b9a200dc8c73 107f02380e96]
	I0722 04:22:58.487402    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0722 04:22:58.497457    4749 logs.go:276] 0 containers: []
	W0722 04:22:58.497468    4749 logs.go:278] No container was found matching "kindnet"
	I0722 04:22:58.497523    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0722 04:22:58.508271    4749 logs.go:276] 2 containers: [ac2f27131054 3222ecbcbbb5]
	I0722 04:22:58.508288    4749 logs.go:123] Gathering logs for etcd [c1a3c1bc5e08] ...
	I0722 04:22:58.508293    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1a3c1bc5e08"
	I0722 04:22:58.521866    4749 logs.go:123] Gathering logs for kube-controller-manager [107f02380e96] ...
	I0722 04:22:58.521876    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 107f02380e96"
	I0722 04:22:58.535686    4749 logs.go:123] Gathering logs for container status ...
	I0722 04:22:58.535696    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 04:22:58.547578    4749 logs.go:123] Gathering logs for kubelet ...
	I0722 04:22:58.547588    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 04:22:58.585020    4749 logs.go:123] Gathering logs for describe nodes ...
	I0722 04:22:58.585031    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0722 04:22:58.619242    4749 logs.go:123] Gathering logs for coredns [a11f092c49f3] ...
	I0722 04:22:58.619254    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a11f092c49f3"
	I0722 04:22:58.631222    4749 logs.go:123] Gathering logs for kube-scheduler [829d882a5dcf] ...
	I0722 04:22:58.631234    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 829d882a5dcf"
	I0722 04:22:58.643949    4749 logs.go:123] Gathering logs for kube-proxy [1be7d7e3405b] ...
	I0722 04:22:58.643960    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1be7d7e3405b"
	I0722 04:22:58.656134    4749 logs.go:123] Gathering logs for kube-controller-manager [b9a200dc8c73] ...
	I0722 04:22:58.656145    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9a200dc8c73"
	I0722 04:22:58.673556    4749 logs.go:123] Gathering logs for storage-provisioner [ac2f27131054] ...
	I0722 04:22:58.673570    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac2f27131054"
	I0722 04:22:58.685070    4749 logs.go:123] Gathering logs for Docker ...
	I0722 04:22:58.685081    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0722 04:22:58.708077    4749 logs.go:123] Gathering logs for dmesg ...
	I0722 04:22:58.708085    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 04:22:58.712628    4749 logs.go:123] Gathering logs for kube-scheduler [9673cbf4cea7] ...
	I0722 04:22:58.712637    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9673cbf4cea7"
	I0722 04:22:58.737329    4749 logs.go:123] Gathering logs for kube-apiserver [b242274d2995] ...
	I0722 04:22:58.737340    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b242274d2995"
	I0722 04:22:58.764318    4749 logs.go:123] Gathering logs for etcd [cdb2f02c95ca] ...
	I0722 04:22:58.764329    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdb2f02c95ca"
	I0722 04:22:58.778701    4749 logs.go:123] Gathering logs for storage-provisioner [3222ecbcbbb5] ...
	I0722 04:22:58.778716    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3222ecbcbbb5"
	I0722 04:22:58.789873    4749 logs.go:123] Gathering logs for kube-apiserver [6f7819ffc2dd] ...
	I0722 04:22:58.789884    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f7819ffc2dd"
	I0722 04:23:01.305092    4749 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:23:06.307427    4749 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:23:06.307587    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0722 04:23:06.323492    4749 logs.go:276] 2 containers: [6f7819ffc2dd b242274d2995]
	I0722 04:23:06.323574    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0722 04:23:06.336276    4749 logs.go:276] 2 containers: [c1a3c1bc5e08 cdb2f02c95ca]
	I0722 04:23:06.336344    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0722 04:23:06.347816    4749 logs.go:276] 1 containers: [a11f092c49f3]
	I0722 04:23:06.347879    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0722 04:23:06.358466    4749 logs.go:276] 2 containers: [829d882a5dcf 9673cbf4cea7]
	I0722 04:23:06.358541    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0722 04:23:06.368891    4749 logs.go:276] 1 containers: [1be7d7e3405b]
	I0722 04:23:06.368963    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0722 04:23:06.379475    4749 logs.go:276] 2 containers: [b9a200dc8c73 107f02380e96]
	I0722 04:23:06.379542    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0722 04:23:06.390098    4749 logs.go:276] 0 containers: []
	W0722 04:23:06.390110    4749 logs.go:278] No container was found matching "kindnet"
	I0722 04:23:06.390172    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0722 04:23:06.400350    4749 logs.go:276] 2 containers: [ac2f27131054 3222ecbcbbb5]
	I0722 04:23:06.400373    4749 logs.go:123] Gathering logs for etcd [c1a3c1bc5e08] ...
	I0722 04:23:06.400378    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1a3c1bc5e08"
	I0722 04:23:06.413751    4749 logs.go:123] Gathering logs for etcd [cdb2f02c95ca] ...
	I0722 04:23:06.413761    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdb2f02c95ca"
	I0722 04:23:06.428032    4749 logs.go:123] Gathering logs for kube-scheduler [829d882a5dcf] ...
	I0722 04:23:06.428044    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 829d882a5dcf"
	I0722 04:23:06.440871    4749 logs.go:123] Gathering logs for kube-scheduler [9673cbf4cea7] ...
	I0722 04:23:06.440882    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9673cbf4cea7"
	I0722 04:23:06.462156    4749 logs.go:123] Gathering logs for storage-provisioner [3222ecbcbbb5] ...
	I0722 04:23:06.462171    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3222ecbcbbb5"
	I0722 04:23:06.473470    4749 logs.go:123] Gathering logs for coredns [a11f092c49f3] ...
	I0722 04:23:06.473483    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a11f092c49f3"
	I0722 04:23:06.486379    4749 logs.go:123] Gathering logs for kube-controller-manager [b9a200dc8c73] ...
	I0722 04:23:06.486391    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9a200dc8c73"
	I0722 04:23:06.504053    4749 logs.go:123] Gathering logs for kube-controller-manager [107f02380e96] ...
	I0722 04:23:06.504065    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 107f02380e96"
	I0722 04:23:06.517697    4749 logs.go:123] Gathering logs for kubelet ...
	I0722 04:23:06.517708    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 04:23:06.556444    4749 logs.go:123] Gathering logs for dmesg ...
	I0722 04:23:06.556453    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 04:23:06.560725    4749 logs.go:123] Gathering logs for kube-apiserver [6f7819ffc2dd] ...
	I0722 04:23:06.560733    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f7819ffc2dd"
	I0722 04:23:06.575818    4749 logs.go:123] Gathering logs for Docker ...
	I0722 04:23:06.575831    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0722 04:23:06.600665    4749 logs.go:123] Gathering logs for describe nodes ...
	I0722 04:23:06.600674    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0722 04:23:06.635076    4749 logs.go:123] Gathering logs for kube-apiserver [b242274d2995] ...
	I0722 04:23:06.635088    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b242274d2995"
	I0722 04:23:06.661240    4749 logs.go:123] Gathering logs for kube-proxy [1be7d7e3405b] ...
	I0722 04:23:06.661250    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1be7d7e3405b"
	I0722 04:23:06.672372    4749 logs.go:123] Gathering logs for storage-provisioner [ac2f27131054] ...
	I0722 04:23:06.672383    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac2f27131054"
	I0722 04:23:06.683472    4749 logs.go:123] Gathering logs for container status ...
	I0722 04:23:06.683483    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 04:23:08.264523    4522 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:23:09.199557    4749 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:23:13.266820    4522 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:23:13.267228    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0722 04:23:13.306393    4522 logs.go:276] 1 containers: [ff0a72834be9]
	I0722 04:23:13.306559    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0722 04:23:13.328794    4522 logs.go:276] 1 containers: [a443754c5936]
	I0722 04:23:13.328881    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0722 04:23:13.346801    4522 logs.go:276] 2 containers: [cc88e2e59cc9 f695590f14ba]
	I0722 04:23:13.346870    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0722 04:23:13.359554    4522 logs.go:276] 1 containers: [19fea8cb2f86]
	I0722 04:23:13.359630    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0722 04:23:13.369903    4522 logs.go:276] 1 containers: [812f238bbb81]
	I0722 04:23:13.369969    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0722 04:23:13.380167    4522 logs.go:276] 1 containers: [e86dcf4cf2ad]
	I0722 04:23:13.380242    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0722 04:23:13.390730    4522 logs.go:276] 0 containers: []
	W0722 04:23:13.390744    4522 logs.go:278] No container was found matching "kindnet"
	I0722 04:23:13.390800    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0722 04:23:13.401632    4522 logs.go:276] 1 containers: [4b4fab967404]
	I0722 04:23:13.401649    4522 logs.go:123] Gathering logs for kube-proxy [812f238bbb81] ...
	I0722 04:23:13.401655    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 812f238bbb81"
	I0722 04:23:13.412918    4522 logs.go:123] Gathering logs for storage-provisioner [4b4fab967404] ...
	I0722 04:23:13.412929    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b4fab967404"
	I0722 04:23:13.430985    4522 logs.go:123] Gathering logs for Docker ...
	I0722 04:23:13.430999    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0722 04:23:13.456001    4522 logs.go:123] Gathering logs for container status ...
	I0722 04:23:13.456009    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 04:23:13.467470    4522 logs.go:123] Gathering logs for dmesg ...
	I0722 04:23:13.467480    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 04:23:13.473591    4522 logs.go:123] Gathering logs for kube-apiserver [ff0a72834be9] ...
	I0722 04:23:13.473600    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff0a72834be9"
	I0722 04:23:13.490446    4522 logs.go:123] Gathering logs for etcd [a443754c5936] ...
	I0722 04:23:13.490457    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a443754c5936"
	I0722 04:23:13.503999    4522 logs.go:123] Gathering logs for kube-scheduler [19fea8cb2f86] ...
	I0722 04:23:13.504012    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19fea8cb2f86"
	I0722 04:23:13.520128    4522 logs.go:123] Gathering logs for kube-controller-manager [e86dcf4cf2ad] ...
	I0722 04:23:13.520140    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e86dcf4cf2ad"
	I0722 04:23:13.538276    4522 logs.go:123] Gathering logs for kubelet ...
	I0722 04:23:13.538287    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0722 04:23:13.555629    4522 logs.go:138] Found kubelet problem: Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: W0722 11:17:46.135858    4280 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:23:13.555723    4522 logs.go:138] Found kubelet problem: Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: E0722 11:17:46.135900    4280 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:23:13.571065    4522 logs.go:138] Found kubelet problem: Jul 22 11:21:55 running-upgrade-724000 kubelet[11305]: W0722 11:21:55.461534   11305 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:23:13.571157    4522 logs.go:138] Found kubelet problem: Jul 22 11:21:55 running-upgrade-724000 kubelet[11305]: E0722 11:21:55.461602   11305 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	I0722 04:23:13.572290    4522 logs.go:123] Gathering logs for describe nodes ...
	I0722 04:23:13.572294    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0722 04:23:13.607920    4522 logs.go:123] Gathering logs for coredns [cc88e2e59cc9] ...
	I0722 04:23:13.607931    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc88e2e59cc9"
	I0722 04:23:13.620667    4522 logs.go:123] Gathering logs for coredns [f695590f14ba] ...
	I0722 04:23:13.620677    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f695590f14ba"
	I0722 04:23:13.632165    4522 out.go:304] Setting ErrFile to fd 2...
	I0722 04:23:13.632176    4522 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0722 04:23:13.632211    4522 out.go:239] X Problems detected in kubelet:
	W0722 04:23:13.632217    4522 out.go:239]   Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: W0722 11:17:46.135858    4280 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:23:13.632220    4522 out.go:239]   Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: E0722 11:17:46.135900    4280 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:23:13.632225    4522 out.go:239]   Jul 22 11:21:55 running-upgrade-724000 kubelet[11305]: W0722 11:21:55.461534   11305 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:23:13.632326    4522 out.go:239]   Jul 22 11:21:55 running-upgrade-724000 kubelet[11305]: E0722 11:21:55.461602   11305 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	I0722 04:23:13.632334    4522 out.go:304] Setting ErrFile to fd 2...
	I0722 04:23:13.632338    4522 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:23:14.201792    4749 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:23:14.201923    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0722 04:23:14.217469    4749 logs.go:276] 2 containers: [6f7819ffc2dd b242274d2995]
	I0722 04:23:14.217551    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0722 04:23:14.229377    4749 logs.go:276] 2 containers: [c1a3c1bc5e08 cdb2f02c95ca]
	I0722 04:23:14.229444    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0722 04:23:14.242617    4749 logs.go:276] 1 containers: [a11f092c49f3]
	I0722 04:23:14.242677    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0722 04:23:14.253063    4749 logs.go:276] 2 containers: [829d882a5dcf 9673cbf4cea7]
	I0722 04:23:14.253128    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0722 04:23:14.267906    4749 logs.go:276] 1 containers: [1be7d7e3405b]
	I0722 04:23:14.267966    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0722 04:23:14.278439    4749 logs.go:276] 2 containers: [b9a200dc8c73 107f02380e96]
	I0722 04:23:14.278502    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0722 04:23:14.290643    4749 logs.go:276] 0 containers: []
	W0722 04:23:14.290656    4749 logs.go:278] No container was found matching "kindnet"
	I0722 04:23:14.290712    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0722 04:23:14.301725    4749 logs.go:276] 2 containers: [ac2f27131054 3222ecbcbbb5]
	I0722 04:23:14.301744    4749 logs.go:123] Gathering logs for kube-apiserver [6f7819ffc2dd] ...
	I0722 04:23:14.301750    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f7819ffc2dd"
	I0722 04:23:14.315202    4749 logs.go:123] Gathering logs for kube-apiserver [b242274d2995] ...
	I0722 04:23:14.315212    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b242274d2995"
	I0722 04:23:14.339589    4749 logs.go:123] Gathering logs for kube-proxy [1be7d7e3405b] ...
	I0722 04:23:14.339601    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1be7d7e3405b"
	I0722 04:23:14.352639    4749 logs.go:123] Gathering logs for kube-controller-manager [b9a200dc8c73] ...
	I0722 04:23:14.352652    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9a200dc8c73"
	I0722 04:23:14.369904    4749 logs.go:123] Gathering logs for kube-controller-manager [107f02380e96] ...
	I0722 04:23:14.369914    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 107f02380e96"
	I0722 04:23:14.386198    4749 logs.go:123] Gathering logs for coredns [a11f092c49f3] ...
	I0722 04:23:14.386212    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a11f092c49f3"
	I0722 04:23:14.397408    4749 logs.go:123] Gathering logs for storage-provisioner [ac2f27131054] ...
	I0722 04:23:14.397420    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac2f27131054"
	I0722 04:23:14.409280    4749 logs.go:123] Gathering logs for kubelet ...
	I0722 04:23:14.409291    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 04:23:14.448426    4749 logs.go:123] Gathering logs for dmesg ...
	I0722 04:23:14.448435    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 04:23:14.452636    4749 logs.go:123] Gathering logs for storage-provisioner [3222ecbcbbb5] ...
	I0722 04:23:14.452643    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3222ecbcbbb5"
	I0722 04:23:14.463567    4749 logs.go:123] Gathering logs for Docker ...
	I0722 04:23:14.463580    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0722 04:23:14.488698    4749 logs.go:123] Gathering logs for describe nodes ...
	I0722 04:23:14.488713    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0722 04:23:14.522827    4749 logs.go:123] Gathering logs for etcd [c1a3c1bc5e08] ...
	I0722 04:23:14.522841    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1a3c1bc5e08"
	I0722 04:23:14.536563    4749 logs.go:123] Gathering logs for etcd [cdb2f02c95ca] ...
	I0722 04:23:14.536574    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdb2f02c95ca"
	I0722 04:23:14.550950    4749 logs.go:123] Gathering logs for kube-scheduler [829d882a5dcf] ...
	I0722 04:23:14.550961    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 829d882a5dcf"
	I0722 04:23:14.563785    4749 logs.go:123] Gathering logs for kube-scheduler [9673cbf4cea7] ...
	I0722 04:23:14.563795    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9673cbf4cea7"
	I0722 04:23:14.588361    4749 logs.go:123] Gathering logs for container status ...
	I0722 04:23:14.588372    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 04:23:17.102298    4749 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:23:22.104496    4749 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:23:22.104672    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0722 04:23:22.123883    4749 logs.go:276] 2 containers: [6f7819ffc2dd b242274d2995]
	I0722 04:23:22.123971    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0722 04:23:22.136626    4749 logs.go:276] 2 containers: [c1a3c1bc5e08 cdb2f02c95ca]
	I0722 04:23:22.136696    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0722 04:23:22.147862    4749 logs.go:276] 1 containers: [a11f092c49f3]
	I0722 04:23:22.147933    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0722 04:23:23.636395    4522 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:23:22.158377    4749 logs.go:276] 2 containers: [829d882a5dcf 9673cbf4cea7]
	I0722 04:23:22.158444    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0722 04:23:22.168713    4749 logs.go:276] 1 containers: [1be7d7e3405b]
	I0722 04:23:22.168817    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0722 04:23:22.179565    4749 logs.go:276] 2 containers: [b9a200dc8c73 107f02380e96]
	I0722 04:23:22.179641    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0722 04:23:22.194576    4749 logs.go:276] 0 containers: []
	W0722 04:23:22.194588    4749 logs.go:278] No container was found matching "kindnet"
	I0722 04:23:22.194647    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0722 04:23:22.205442    4749 logs.go:276] 2 containers: [ac2f27131054 3222ecbcbbb5]
	I0722 04:23:22.205458    4749 logs.go:123] Gathering logs for kube-apiserver [b242274d2995] ...
	I0722 04:23:22.205463    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b242274d2995"
	I0722 04:23:22.229156    4749 logs.go:123] Gathering logs for kube-scheduler [829d882a5dcf] ...
	I0722 04:23:22.229167    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 829d882a5dcf"
	I0722 04:23:22.242801    4749 logs.go:123] Gathering logs for kube-scheduler [9673cbf4cea7] ...
	I0722 04:23:22.242811    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9673cbf4cea7"
	I0722 04:23:22.263929    4749 logs.go:123] Gathering logs for kube-controller-manager [107f02380e96] ...
	I0722 04:23:22.263939    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 107f02380e96"
	I0722 04:23:22.279327    4749 logs.go:123] Gathering logs for storage-provisioner [3222ecbcbbb5] ...
	I0722 04:23:22.279340    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3222ecbcbbb5"
	I0722 04:23:22.290694    4749 logs.go:123] Gathering logs for dmesg ...
	I0722 04:23:22.290705    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 04:23:22.295160    4749 logs.go:123] Gathering logs for kube-apiserver [6f7819ffc2dd] ...
	I0722 04:23:22.295168    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f7819ffc2dd"
	I0722 04:23:22.309131    4749 logs.go:123] Gathering logs for etcd [cdb2f02c95ca] ...
	I0722 04:23:22.309144    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdb2f02c95ca"
	I0722 04:23:22.323568    4749 logs.go:123] Gathering logs for kube-proxy [1be7d7e3405b] ...
	I0722 04:23:22.323579    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1be7d7e3405b"
	I0722 04:23:22.335487    4749 logs.go:123] Gathering logs for storage-provisioner [ac2f27131054] ...
	I0722 04:23:22.335497    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac2f27131054"
	I0722 04:23:22.346898    4749 logs.go:123] Gathering logs for Docker ...
	I0722 04:23:22.346912    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0722 04:23:22.371861    4749 logs.go:123] Gathering logs for describe nodes ...
	I0722 04:23:22.371868    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0722 04:23:22.408248    4749 logs.go:123] Gathering logs for etcd [c1a3c1bc5e08] ...
	I0722 04:23:22.408263    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1a3c1bc5e08"
	I0722 04:23:22.423205    4749 logs.go:123] Gathering logs for coredns [a11f092c49f3] ...
	I0722 04:23:22.423224    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a11f092c49f3"
	I0722 04:23:22.435666    4749 logs.go:123] Gathering logs for kubelet ...
	I0722 04:23:22.435679    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 04:23:22.476150    4749 logs.go:123] Gathering logs for kube-controller-manager [b9a200dc8c73] ...
	I0722 04:23:22.476162    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9a200dc8c73"
	I0722 04:23:22.494957    4749 logs.go:123] Gathering logs for container status ...
	I0722 04:23:22.494968    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 04:23:25.009387    4749 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:23:28.638968    4522 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:23:28.639208    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0722 04:23:28.665554    4522 logs.go:276] 1 containers: [ff0a72834be9]
	I0722 04:23:28.665641    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0722 04:23:28.677579    4522 logs.go:276] 1 containers: [a443754c5936]
	I0722 04:23:28.677651    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0722 04:23:28.688231    4522 logs.go:276] 2 containers: [cc88e2e59cc9 f695590f14ba]
	I0722 04:23:28.688298    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0722 04:23:28.698465    4522 logs.go:276] 1 containers: [19fea8cb2f86]
	I0722 04:23:28.698535    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0722 04:23:28.710578    4522 logs.go:276] 1 containers: [812f238bbb81]
	I0722 04:23:28.710643    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0722 04:23:28.725256    4522 logs.go:276] 1 containers: [e86dcf4cf2ad]
	I0722 04:23:28.725323    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0722 04:23:28.737468    4522 logs.go:276] 0 containers: []
	W0722 04:23:28.737480    4522 logs.go:278] No container was found matching "kindnet"
	I0722 04:23:28.737539    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0722 04:23:28.748174    4522 logs.go:276] 1 containers: [4b4fab967404]
	I0722 04:23:28.748188    4522 logs.go:123] Gathering logs for dmesg ...
	I0722 04:23:28.748194    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 04:23:28.752883    4522 logs.go:123] Gathering logs for describe nodes ...
	I0722 04:23:28.752894    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0722 04:23:28.788926    4522 logs.go:123] Gathering logs for coredns [f695590f14ba] ...
	I0722 04:23:28.788937    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f695590f14ba"
	I0722 04:23:28.800132    4522 logs.go:123] Gathering logs for kube-scheduler [19fea8cb2f86] ...
	I0722 04:23:28.800143    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19fea8cb2f86"
	I0722 04:23:28.815985    4522 logs.go:123] Gathering logs for container status ...
	I0722 04:23:28.815996    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 04:23:28.827455    4522 logs.go:123] Gathering logs for kubelet ...
	I0722 04:23:28.827466    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0722 04:23:28.844391    4522 logs.go:138] Found kubelet problem: Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: W0722 11:17:46.135858    4280 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:23:28.844485    4522 logs.go:138] Found kubelet problem: Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: E0722 11:17:46.135900    4280 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:23:28.859910    4522 logs.go:138] Found kubelet problem: Jul 22 11:21:55 running-upgrade-724000 kubelet[11305]: W0722 11:21:55.461534   11305 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:23:28.860002    4522 logs.go:138] Found kubelet problem: Jul 22 11:21:55 running-upgrade-724000 kubelet[11305]: E0722 11:21:55.461602   11305 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	I0722 04:23:28.861217    4522 logs.go:123] Gathering logs for kube-apiserver [ff0a72834be9] ...
	I0722 04:23:28.861225    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff0a72834be9"
	I0722 04:23:28.877404    4522 logs.go:123] Gathering logs for etcd [a443754c5936] ...
	I0722 04:23:28.877421    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a443754c5936"
	I0722 04:23:28.895329    4522 logs.go:123] Gathering logs for coredns [cc88e2e59cc9] ...
	I0722 04:23:28.895345    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc88e2e59cc9"
	I0722 04:23:28.908292    4522 logs.go:123] Gathering logs for kube-proxy [812f238bbb81] ...
	I0722 04:23:28.908303    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 812f238bbb81"
	I0722 04:23:28.920290    4522 logs.go:123] Gathering logs for kube-controller-manager [e86dcf4cf2ad] ...
	I0722 04:23:28.920303    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e86dcf4cf2ad"
	I0722 04:23:28.939016    4522 logs.go:123] Gathering logs for storage-provisioner [4b4fab967404] ...
	I0722 04:23:28.939032    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b4fab967404"
	I0722 04:23:28.951038    4522 logs.go:123] Gathering logs for Docker ...
	I0722 04:23:28.951051    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0722 04:23:28.975698    4522 out.go:304] Setting ErrFile to fd 2...
	I0722 04:23:28.975708    4522 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0722 04:23:28.975737    4522 out.go:239] X Problems detected in kubelet:
	W0722 04:23:28.975741    4522 out.go:239]   Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: W0722 11:17:46.135858    4280 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:23:28.975745    4522 out.go:239]   Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: E0722 11:17:46.135900    4280 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:23:28.975750    4522 out.go:239]   Jul 22 11:21:55 running-upgrade-724000 kubelet[11305]: W0722 11:21:55.461534   11305 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:23:28.975769    4522 out.go:239]   Jul 22 11:21:55 running-upgrade-724000 kubelet[11305]: E0722 11:21:55.461602   11305 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	I0722 04:23:28.975782    4522 out.go:304] Setting ErrFile to fd 2...
	I0722 04:23:28.975786    4522 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:23:30.011584    4749 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:23:30.011704    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0722 04:23:30.031397    4749 logs.go:276] 2 containers: [6f7819ffc2dd b242274d2995]
	I0722 04:23:30.031481    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0722 04:23:30.045556    4749 logs.go:276] 2 containers: [c1a3c1bc5e08 cdb2f02c95ca]
	I0722 04:23:30.045627    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0722 04:23:30.058509    4749 logs.go:276] 1 containers: [a11f092c49f3]
	I0722 04:23:30.058576    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0722 04:23:30.068803    4749 logs.go:276] 2 containers: [829d882a5dcf 9673cbf4cea7]
	I0722 04:23:30.068875    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0722 04:23:30.079594    4749 logs.go:276] 1 containers: [1be7d7e3405b]
	I0722 04:23:30.079657    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0722 04:23:30.090980    4749 logs.go:276] 2 containers: [b9a200dc8c73 107f02380e96]
	I0722 04:23:30.091051    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0722 04:23:30.101165    4749 logs.go:276] 0 containers: []
	W0722 04:23:30.101174    4749 logs.go:278] No container was found matching "kindnet"
	I0722 04:23:30.101226    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0722 04:23:30.111745    4749 logs.go:276] 2 containers: [ac2f27131054 3222ecbcbbb5]
	I0722 04:23:30.111766    4749 logs.go:123] Gathering logs for kube-controller-manager [b9a200dc8c73] ...
	I0722 04:23:30.111773    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9a200dc8c73"
	I0722 04:23:30.129219    4749 logs.go:123] Gathering logs for container status ...
	I0722 04:23:30.129232    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 04:23:30.141673    4749 logs.go:123] Gathering logs for kubelet ...
	I0722 04:23:30.141684    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 04:23:30.181125    4749 logs.go:123] Gathering logs for describe nodes ...
	I0722 04:23:30.181136    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0722 04:23:30.217914    4749 logs.go:123] Gathering logs for storage-provisioner [3222ecbcbbb5] ...
	I0722 04:23:30.217926    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3222ecbcbbb5"
	I0722 04:23:30.229055    4749 logs.go:123] Gathering logs for dmesg ...
	I0722 04:23:30.229067    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 04:23:30.233189    4749 logs.go:123] Gathering logs for etcd [c1a3c1bc5e08] ...
	I0722 04:23:30.233196    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1a3c1bc5e08"
	I0722 04:23:30.247511    4749 logs.go:123] Gathering logs for etcd [cdb2f02c95ca] ...
	I0722 04:23:30.247523    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdb2f02c95ca"
	I0722 04:23:30.262241    4749 logs.go:123] Gathering logs for coredns [a11f092c49f3] ...
	I0722 04:23:30.262253    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a11f092c49f3"
	I0722 04:23:30.273727    4749 logs.go:123] Gathering logs for kube-scheduler [829d882a5dcf] ...
	I0722 04:23:30.273738    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 829d882a5dcf"
	I0722 04:23:30.286854    4749 logs.go:123] Gathering logs for kube-proxy [1be7d7e3405b] ...
	I0722 04:23:30.286865    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1be7d7e3405b"
	I0722 04:23:30.298260    4749 logs.go:123] Gathering logs for kube-controller-manager [107f02380e96] ...
	I0722 04:23:30.298271    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 107f02380e96"
	I0722 04:23:30.318022    4749 logs.go:123] Gathering logs for storage-provisioner [ac2f27131054] ...
	I0722 04:23:30.318035    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac2f27131054"
	I0722 04:23:30.329010    4749 logs.go:123] Gathering logs for kube-apiserver [6f7819ffc2dd] ...
	I0722 04:23:30.329019    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f7819ffc2dd"
	I0722 04:23:30.342546    4749 logs.go:123] Gathering logs for kube-apiserver [b242274d2995] ...
	I0722 04:23:30.342557    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b242274d2995"
	I0722 04:23:30.367173    4749 logs.go:123] Gathering logs for kube-scheduler [9673cbf4cea7] ...
	I0722 04:23:30.367185    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9673cbf4cea7"
	I0722 04:23:30.391160    4749 logs.go:123] Gathering logs for Docker ...
	I0722 04:23:30.391170    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0722 04:23:32.916133    4749 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:23:38.977868    4522 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:23:37.918360    4749 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:23:37.918539    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0722 04:23:37.930783    4749 logs.go:276] 2 containers: [6f7819ffc2dd b242274d2995]
	I0722 04:23:37.930876    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0722 04:23:37.943934    4749 logs.go:276] 2 containers: [c1a3c1bc5e08 cdb2f02c95ca]
	I0722 04:23:37.944014    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0722 04:23:37.954429    4749 logs.go:276] 1 containers: [a11f092c49f3]
	I0722 04:23:37.954496    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0722 04:23:37.965427    4749 logs.go:276] 2 containers: [829d882a5dcf 9673cbf4cea7]
	I0722 04:23:37.965493    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0722 04:23:37.976093    4749 logs.go:276] 1 containers: [1be7d7e3405b]
	I0722 04:23:37.976161    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0722 04:23:37.986827    4749 logs.go:276] 2 containers: [b9a200dc8c73 107f02380e96]
	I0722 04:23:37.986893    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0722 04:23:38.010467    4749 logs.go:276] 0 containers: []
	W0722 04:23:38.010483    4749 logs.go:278] No container was found matching "kindnet"
	I0722 04:23:38.010538    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0722 04:23:38.022198    4749 logs.go:276] 2 containers: [ac2f27131054 3222ecbcbbb5]
	I0722 04:23:38.022215    4749 logs.go:123] Gathering logs for Docker ...
	I0722 04:23:38.022220    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0722 04:23:38.047179    4749 logs.go:123] Gathering logs for dmesg ...
	I0722 04:23:38.047187    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 04:23:38.051616    4749 logs.go:123] Gathering logs for etcd [cdb2f02c95ca] ...
	I0722 04:23:38.051624    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdb2f02c95ca"
	I0722 04:23:38.066131    4749 logs.go:123] Gathering logs for kube-proxy [1be7d7e3405b] ...
	I0722 04:23:38.066141    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1be7d7e3405b"
	I0722 04:23:38.077466    4749 logs.go:123] Gathering logs for kube-scheduler [9673cbf4cea7] ...
	I0722 04:23:38.077476    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9673cbf4cea7"
	I0722 04:23:38.098886    4749 logs.go:123] Gathering logs for kube-controller-manager [107f02380e96] ...
	I0722 04:23:38.098897    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 107f02380e96"
	I0722 04:23:38.112904    4749 logs.go:123] Gathering logs for storage-provisioner [ac2f27131054] ...
	I0722 04:23:38.112914    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac2f27131054"
	I0722 04:23:38.124652    4749 logs.go:123] Gathering logs for container status ...
	I0722 04:23:38.124662    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 04:23:38.136494    4749 logs.go:123] Gathering logs for kube-apiserver [b242274d2995] ...
	I0722 04:23:38.136507    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b242274d2995"
	I0722 04:23:38.162075    4749 logs.go:123] Gathering logs for etcd [c1a3c1bc5e08] ...
	I0722 04:23:38.162086    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1a3c1bc5e08"
	I0722 04:23:38.178456    4749 logs.go:123] Gathering logs for kube-scheduler [829d882a5dcf] ...
	I0722 04:23:38.178466    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 829d882a5dcf"
	I0722 04:23:38.191288    4749 logs.go:123] Gathering logs for kubelet ...
	I0722 04:23:38.191299    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 04:23:38.231457    4749 logs.go:123] Gathering logs for kube-apiserver [6f7819ffc2dd] ...
	I0722 04:23:38.231466    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f7819ffc2dd"
	I0722 04:23:38.252476    4749 logs.go:123] Gathering logs for kube-controller-manager [b9a200dc8c73] ...
	I0722 04:23:38.252486    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9a200dc8c73"
	I0722 04:23:38.269483    4749 logs.go:123] Gathering logs for describe nodes ...
	I0722 04:23:38.269494    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0722 04:23:38.305207    4749 logs.go:123] Gathering logs for coredns [a11f092c49f3] ...
	I0722 04:23:38.305218    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a11f092c49f3"
	I0722 04:23:38.316768    4749 logs.go:123] Gathering logs for storage-provisioner [3222ecbcbbb5] ...
	I0722 04:23:38.316779    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3222ecbcbbb5"
	I0722 04:23:40.835132    4749 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:23:43.980170    4522 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:23:43.980331    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0722 04:23:43.997367    4522 logs.go:276] 1 containers: [ff0a72834be9]
	I0722 04:23:43.997446    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0722 04:23:44.010065    4522 logs.go:276] 1 containers: [a443754c5936]
	I0722 04:23:44.010139    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0722 04:23:44.022061    4522 logs.go:276] 2 containers: [cc88e2e59cc9 f695590f14ba]
	I0722 04:23:44.022132    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0722 04:23:44.032901    4522 logs.go:276] 1 containers: [19fea8cb2f86]
	I0722 04:23:44.032969    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0722 04:23:44.043549    4522 logs.go:276] 1 containers: [812f238bbb81]
	I0722 04:23:44.043622    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0722 04:23:44.053770    4522 logs.go:276] 1 containers: [e86dcf4cf2ad]
	I0722 04:23:44.053839    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0722 04:23:44.063925    4522 logs.go:276] 0 containers: []
	W0722 04:23:44.063937    4522 logs.go:278] No container was found matching "kindnet"
	I0722 04:23:44.063998    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0722 04:23:44.074779    4522 logs.go:276] 1 containers: [4b4fab967404]
	I0722 04:23:44.074797    4522 logs.go:123] Gathering logs for kube-apiserver [ff0a72834be9] ...
	I0722 04:23:44.074803    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff0a72834be9"
	I0722 04:23:44.091715    4522 logs.go:123] Gathering logs for etcd [a443754c5936] ...
	I0722 04:23:44.091725    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a443754c5936"
	I0722 04:23:44.105670    4522 logs.go:123] Gathering logs for kube-scheduler [19fea8cb2f86] ...
	I0722 04:23:44.105680    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19fea8cb2f86"
	I0722 04:23:44.126908    4522 logs.go:123] Gathering logs for kube-proxy [812f238bbb81] ...
	I0722 04:23:44.126922    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 812f238bbb81"
	I0722 04:23:44.139075    4522 logs.go:123] Gathering logs for kube-controller-manager [e86dcf4cf2ad] ...
	I0722 04:23:44.139085    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e86dcf4cf2ad"
	I0722 04:23:44.156616    4522 logs.go:123] Gathering logs for storage-provisioner [4b4fab967404] ...
	I0722 04:23:44.156625    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b4fab967404"
	I0722 04:23:44.168505    4522 logs.go:123] Gathering logs for kubelet ...
	I0722 04:23:44.168515    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0722 04:23:44.186472    4522 logs.go:138] Found kubelet problem: Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: W0722 11:17:46.135858    4280 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:23:44.186567    4522 logs.go:138] Found kubelet problem: Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: E0722 11:17:46.135900    4280 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:23:44.201809    4522 logs.go:138] Found kubelet problem: Jul 22 11:21:55 running-upgrade-724000 kubelet[11305]: W0722 11:21:55.461534   11305 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:23:44.201901    4522 logs.go:138] Found kubelet problem: Jul 22 11:21:55 running-upgrade-724000 kubelet[11305]: E0722 11:21:55.461602   11305 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	I0722 04:23:44.203074    4522 logs.go:123] Gathering logs for describe nodes ...
	I0722 04:23:44.203079    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0722 04:23:44.237899    4522 logs.go:123] Gathering logs for Docker ...
	I0722 04:23:44.237907    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0722 04:23:44.261532    4522 logs.go:123] Gathering logs for container status ...
	I0722 04:23:44.261540    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 04:23:44.272864    4522 logs.go:123] Gathering logs for coredns [f695590f14ba] ...
	I0722 04:23:44.272876    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f695590f14ba"
	I0722 04:23:44.287191    4522 logs.go:123] Gathering logs for dmesg ...
	I0722 04:23:44.287202    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 04:23:44.291935    4522 logs.go:123] Gathering logs for coredns [cc88e2e59cc9] ...
	I0722 04:23:44.291944    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc88e2e59cc9"
	I0722 04:23:44.303279    4522 out.go:304] Setting ErrFile to fd 2...
	I0722 04:23:44.303288    4522 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0722 04:23:44.303313    4522 out.go:239] X Problems detected in kubelet:
	W0722 04:23:44.303318    4522 out.go:239]   Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: W0722 11:17:46.135858    4280 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:23:44.303322    4522 out.go:239]   Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: E0722 11:17:46.135900    4280 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:23:44.303326    4522 out.go:239]   Jul 22 11:21:55 running-upgrade-724000 kubelet[11305]: W0722 11:21:55.461534   11305 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:23:44.303330    4522 out.go:239]   Jul 22 11:21:55 running-upgrade-724000 kubelet[11305]: E0722 11:21:55.461602   11305 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	I0722 04:23:44.303334    4522 out.go:304] Setting ErrFile to fd 2...
	I0722 04:23:44.303336    4522 out.go:338] TERM=,COLORTERM=, which probably does not support color
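
The kubelet problems flagged just above are RBAC denials: the node user system:node:running-upgrade-724000 is refused a list of the kube-proxy ConfigMap because the Node authorizer finds no pod on that node that references the object. A minimal sketch for confirming the denial from outside the guest, assuming kubectl is pointed at this cluster's kubeconfig (the node name is copied from the log):

    # Impersonate the node user and ask whether it may list ConfigMaps in kube-system
    kubectl auth can-i list configmaps \
      --as=system:node:running-upgrade-724000 \
      -n kube-system

    # With ordinary admin credentials the ConfigMap itself is readable
    kubectl -n kube-system get configmap kube-proxy -o yaml
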
	I0722 04:23:45.837424    4749 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:23:45.837596    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0722 04:23:45.854206    4749 logs.go:276] 2 containers: [6f7819ffc2dd b242274d2995]
	I0722 04:23:45.854298    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0722 04:23:45.866978    4749 logs.go:276] 2 containers: [c1a3c1bc5e08 cdb2f02c95ca]
	I0722 04:23:45.867052    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0722 04:23:45.878019    4749 logs.go:276] 1 containers: [a11f092c49f3]
	I0722 04:23:45.878092    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0722 04:23:45.888506    4749 logs.go:276] 2 containers: [829d882a5dcf 9673cbf4cea7]
	I0722 04:23:45.888570    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0722 04:23:45.898750    4749 logs.go:276] 1 containers: [1be7d7e3405b]
	I0722 04:23:45.898820    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0722 04:23:45.909865    4749 logs.go:276] 2 containers: [b9a200dc8c73 107f02380e96]
	I0722 04:23:45.909932    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0722 04:23:45.923841    4749 logs.go:276] 0 containers: []
	W0722 04:23:45.923852    4749 logs.go:278] No container was found matching "kindnet"
	I0722 04:23:45.923903    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0722 04:23:45.934578    4749 logs.go:276] 2 containers: [ac2f27131054 3222ecbcbbb5]
	I0722 04:23:45.934596    4749 logs.go:123] Gathering logs for kubelet ...
	I0722 04:23:45.934601    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 04:23:45.973048    4749 logs.go:123] Gathering logs for dmesg ...
	I0722 04:23:45.973058    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 04:23:45.977066    4749 logs.go:123] Gathering logs for etcd [cdb2f02c95ca] ...
	I0722 04:23:45.977075    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdb2f02c95ca"
	I0722 04:23:45.991492    4749 logs.go:123] Gathering logs for kube-controller-manager [b9a200dc8c73] ...
	I0722 04:23:45.991505    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9a200dc8c73"
	I0722 04:23:46.009337    4749 logs.go:123] Gathering logs for storage-provisioner [ac2f27131054] ...
	I0722 04:23:46.009347    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac2f27131054"
	I0722 04:23:46.020938    4749 logs.go:123] Gathering logs for storage-provisioner [3222ecbcbbb5] ...
	I0722 04:23:46.020948    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3222ecbcbbb5"
	I0722 04:23:46.032301    4749 logs.go:123] Gathering logs for describe nodes ...
	I0722 04:23:46.032312    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0722 04:23:46.068598    4749 logs.go:123] Gathering logs for kube-apiserver [b242274d2995] ...
	I0722 04:23:46.068609    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b242274d2995"
	I0722 04:23:46.094057    4749 logs.go:123] Gathering logs for etcd [c1a3c1bc5e08] ...
	I0722 04:23:46.094073    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1a3c1bc5e08"
	I0722 04:23:46.112273    4749 logs.go:123] Gathering logs for kube-scheduler [9673cbf4cea7] ...
	I0722 04:23:46.112285    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9673cbf4cea7"
	I0722 04:23:46.133613    4749 logs.go:123] Gathering logs for kube-proxy [1be7d7e3405b] ...
	I0722 04:23:46.133625    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1be7d7e3405b"
	I0722 04:23:46.146051    4749 logs.go:123] Gathering logs for kube-controller-manager [107f02380e96] ...
	I0722 04:23:46.146062    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 107f02380e96"
	I0722 04:23:46.164458    4749 logs.go:123] Gathering logs for kube-apiserver [6f7819ffc2dd] ...
	I0722 04:23:46.164468    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f7819ffc2dd"
	I0722 04:23:46.178143    4749 logs.go:123] Gathering logs for coredns [a11f092c49f3] ...
	I0722 04:23:46.178154    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a11f092c49f3"
	I0722 04:23:46.189730    4749 logs.go:123] Gathering logs for kube-scheduler [829d882a5dcf] ...
	I0722 04:23:46.189743    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 829d882a5dcf"
	I0722 04:23:46.202641    4749 logs.go:123] Gathering logs for Docker ...
	I0722 04:23:46.202651    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0722 04:23:46.227513    4749 logs.go:123] Gathering logs for container status ...
	I0722 04:23:46.227521    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
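
Each "Gathering logs for ..." step above is a single command executed inside the guest over SSH (journalctl for kubelet and Docker, docker logs --tail 400 for individual containers, crictl/docker ps for container status). A sketch of reproducing the same collection by hand while the cluster is still up; <profile> and <container-id> are placeholders, not values from this run:

    # Run the same collection commands against the profile under test
    minikube -p <profile> ssh -- sudo journalctl -u kubelet -n 400
    minikube -p <profile> ssh -- sudo journalctl -u docker -u cri-docker -n 400
    minikube -p <profile> ssh -- docker logs --tail 400 <container-id>
    minikube -p <profile> ssh -- "sudo crictl ps -a || sudo docker ps -a"
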
	I0722 04:23:48.741174    4749 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:23:54.307387    4522 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:23:53.742141    4749 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:23:53.742289    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0722 04:23:53.753308    4749 logs.go:276] 2 containers: [6f7819ffc2dd b242274d2995]
	I0722 04:23:53.753382    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0722 04:23:53.763640    4749 logs.go:276] 2 containers: [c1a3c1bc5e08 cdb2f02c95ca]
	I0722 04:23:53.763713    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0722 04:23:53.773660    4749 logs.go:276] 1 containers: [a11f092c49f3]
	I0722 04:23:53.773737    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0722 04:23:53.785485    4749 logs.go:276] 2 containers: [829d882a5dcf 9673cbf4cea7]
	I0722 04:23:53.785563    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0722 04:23:53.796325    4749 logs.go:276] 1 containers: [1be7d7e3405b]
	I0722 04:23:53.796392    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0722 04:23:53.809250    4749 logs.go:276] 2 containers: [b9a200dc8c73 107f02380e96]
	I0722 04:23:53.809320    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0722 04:23:53.819398    4749 logs.go:276] 0 containers: []
	W0722 04:23:53.819408    4749 logs.go:278] No container was found matching "kindnet"
	I0722 04:23:53.819462    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0722 04:23:53.830026    4749 logs.go:276] 2 containers: [ac2f27131054 3222ecbcbbb5]
	I0722 04:23:53.830048    4749 logs.go:123] Gathering logs for dmesg ...
	I0722 04:23:53.830054    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 04:23:53.834313    4749 logs.go:123] Gathering logs for kube-apiserver [6f7819ffc2dd] ...
	I0722 04:23:53.834323    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f7819ffc2dd"
	I0722 04:23:53.848366    4749 logs.go:123] Gathering logs for etcd [c1a3c1bc5e08] ...
	I0722 04:23:53.848379    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1a3c1bc5e08"
	I0722 04:23:53.868929    4749 logs.go:123] Gathering logs for storage-provisioner [ac2f27131054] ...
	I0722 04:23:53.868943    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac2f27131054"
	I0722 04:23:53.887706    4749 logs.go:123] Gathering logs for storage-provisioner [3222ecbcbbb5] ...
	I0722 04:23:53.887717    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3222ecbcbbb5"
	I0722 04:23:53.898959    4749 logs.go:123] Gathering logs for Docker ...
	I0722 04:23:53.898974    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0722 04:23:53.921321    4749 logs.go:123] Gathering logs for etcd [cdb2f02c95ca] ...
	I0722 04:23:53.921331    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdb2f02c95ca"
	I0722 04:23:53.935220    4749 logs.go:123] Gathering logs for kube-proxy [1be7d7e3405b] ...
	I0722 04:23:53.935230    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1be7d7e3405b"
	I0722 04:23:53.947581    4749 logs.go:123] Gathering logs for kube-controller-manager [107f02380e96] ...
	I0722 04:23:53.947591    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 107f02380e96"
	I0722 04:23:53.963753    4749 logs.go:123] Gathering logs for container status ...
	I0722 04:23:53.963763    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 04:23:53.975571    4749 logs.go:123] Gathering logs for kubelet ...
	I0722 04:23:53.975581    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 04:23:54.015723    4749 logs.go:123] Gathering logs for describe nodes ...
	I0722 04:23:54.015739    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0722 04:23:54.055399    4749 logs.go:123] Gathering logs for kube-apiserver [b242274d2995] ...
	I0722 04:23:54.055412    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b242274d2995"
	I0722 04:23:54.084010    4749 logs.go:123] Gathering logs for kube-scheduler [9673cbf4cea7] ...
	I0722 04:23:54.084031    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9673cbf4cea7"
	I0722 04:23:54.105387    4749 logs.go:123] Gathering logs for coredns [a11f092c49f3] ...
	I0722 04:23:54.105398    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a11f092c49f3"
	I0722 04:23:54.119634    4749 logs.go:123] Gathering logs for kube-scheduler [829d882a5dcf] ...
	I0722 04:23:54.119646    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 829d882a5dcf"
	I0722 04:23:54.132720    4749 logs.go:123] Gathering logs for kube-controller-manager [b9a200dc8c73] ...
	I0722 04:23:54.132732    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9a200dc8c73"
	I0722 04:23:56.652981    4749 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:23:59.309573    4522 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:23:59.309797    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0722 04:23:59.327374    4522 logs.go:276] 1 containers: [ff0a72834be9]
	I0722 04:23:59.327465    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0722 04:23:59.340409    4522 logs.go:276] 1 containers: [a443754c5936]
	I0722 04:23:59.340486    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0722 04:23:59.352105    4522 logs.go:276] 4 containers: [11f612391bb5 3aa1fabe8d3d cc88e2e59cc9 f695590f14ba]
	I0722 04:23:59.352177    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0722 04:23:59.362560    4522 logs.go:276] 1 containers: [19fea8cb2f86]
	I0722 04:23:59.362627    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0722 04:23:59.372972    4522 logs.go:276] 1 containers: [812f238bbb81]
	I0722 04:23:59.373049    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0722 04:23:59.384019    4522 logs.go:276] 1 containers: [e86dcf4cf2ad]
	I0722 04:23:59.384089    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0722 04:23:59.394526    4522 logs.go:276] 0 containers: []
	W0722 04:23:59.394540    4522 logs.go:278] No container was found matching "kindnet"
	I0722 04:23:59.394600    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0722 04:23:59.404912    4522 logs.go:276] 1 containers: [4b4fab967404]
	I0722 04:23:59.404931    4522 logs.go:123] Gathering logs for kube-apiserver [ff0a72834be9] ...
	I0722 04:23:59.404937    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff0a72834be9"
	I0722 04:23:59.418861    4522 logs.go:123] Gathering logs for etcd [a443754c5936] ...
	I0722 04:23:59.418872    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a443754c5936"
	I0722 04:23:59.432899    4522 logs.go:123] Gathering logs for coredns [3aa1fabe8d3d] ...
	I0722 04:23:59.432909    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3aa1fabe8d3d"
	I0722 04:23:59.446502    4522 logs.go:123] Gathering logs for coredns [f695590f14ba] ...
	I0722 04:23:59.446514    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f695590f14ba"
	I0722 04:23:59.457894    4522 logs.go:123] Gathering logs for storage-provisioner [4b4fab967404] ...
	I0722 04:23:59.457904    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b4fab967404"
	I0722 04:23:59.472933    4522 logs.go:123] Gathering logs for container status ...
	I0722 04:23:59.472945    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 04:23:59.484474    4522 logs.go:123] Gathering logs for kubelet ...
	I0722 04:23:59.484487    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0722 04:23:59.500028    4522 logs.go:138] Found kubelet problem: Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: W0722 11:17:46.135858    4280 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:23:59.500120    4522 logs.go:138] Found kubelet problem: Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: E0722 11:17:46.135900    4280 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:23:59.515591    4522 logs.go:138] Found kubelet problem: Jul 22 11:21:55 running-upgrade-724000 kubelet[11305]: W0722 11:21:55.461534   11305 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:23:59.515682    4522 logs.go:138] Found kubelet problem: Jul 22 11:21:55 running-upgrade-724000 kubelet[11305]: E0722 11:21:55.461602   11305 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	I0722 04:23:59.516891    4522 logs.go:123] Gathering logs for describe nodes ...
	I0722 04:23:59.516896    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0722 04:23:59.551993    4522 logs.go:123] Gathering logs for coredns [11f612391bb5] ...
	I0722 04:23:59.552002    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11f612391bb5"
	I0722 04:23:59.563462    4522 logs.go:123] Gathering logs for kube-proxy [812f238bbb81] ...
	I0722 04:23:59.563472    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 812f238bbb81"
	I0722 04:23:59.575480    4522 logs.go:123] Gathering logs for kube-controller-manager [e86dcf4cf2ad] ...
	I0722 04:23:59.575489    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e86dcf4cf2ad"
	I0722 04:23:59.592757    4522 logs.go:123] Gathering logs for Docker ...
	I0722 04:23:59.592766    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0722 04:23:59.617898    4522 logs.go:123] Gathering logs for dmesg ...
	I0722 04:23:59.617906    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 04:23:59.622672    4522 logs.go:123] Gathering logs for kube-scheduler [19fea8cb2f86] ...
	I0722 04:23:59.622681    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19fea8cb2f86"
	I0722 04:23:59.638252    4522 logs.go:123] Gathering logs for coredns [cc88e2e59cc9] ...
	I0722 04:23:59.638263    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc88e2e59cc9"
	I0722 04:23:59.650151    4522 out.go:304] Setting ErrFile to fd 2...
	I0722 04:23:59.650162    4522 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0722 04:23:59.650189    4522 out.go:239] X Problems detected in kubelet:
	W0722 04:23:59.650194    4522 out.go:239]   Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: W0722 11:17:46.135858    4280 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:23:59.650213    4522 out.go:239]   Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: E0722 11:17:46.135900    4280 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:23:59.650223    4522 out.go:239]   Jul 22 11:21:55 running-upgrade-724000 kubelet[11305]: W0722 11:21:55.461534   11305 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:23:59.650229    4522 out.go:239]   Jul 22 11:21:55 running-upgrade-724000 kubelet[11305]: E0722 11:21:55.461602   11305 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	I0722 04:23:59.650232    4522 out.go:304] Setting ErrFile to fd 2...
	I0722 04:23:59.650235    4522 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:24:01.655264    4749 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:24:01.655513    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0722 04:24:01.672838    4749 logs.go:276] 2 containers: [6f7819ffc2dd b242274d2995]
	I0722 04:24:01.672925    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0722 04:24:01.686308    4749 logs.go:276] 2 containers: [c1a3c1bc5e08 cdb2f02c95ca]
	I0722 04:24:01.686391    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0722 04:24:01.697462    4749 logs.go:276] 1 containers: [a11f092c49f3]
	I0722 04:24:01.697526    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0722 04:24:01.709732    4749 logs.go:276] 2 containers: [829d882a5dcf 9673cbf4cea7]
	I0722 04:24:01.709806    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0722 04:24:01.722232    4749 logs.go:276] 1 containers: [1be7d7e3405b]
	I0722 04:24:01.722300    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0722 04:24:01.732928    4749 logs.go:276] 2 containers: [b9a200dc8c73 107f02380e96]
	I0722 04:24:01.732994    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0722 04:24:01.742980    4749 logs.go:276] 0 containers: []
	W0722 04:24:01.742992    4749 logs.go:278] No container was found matching "kindnet"
	I0722 04:24:01.743042    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0722 04:24:01.753642    4749 logs.go:276] 2 containers: [ac2f27131054 3222ecbcbbb5]
	I0722 04:24:01.753659    4749 logs.go:123] Gathering logs for describe nodes ...
	I0722 04:24:01.753665    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0722 04:24:01.788023    4749 logs.go:123] Gathering logs for etcd [c1a3c1bc5e08] ...
	I0722 04:24:01.788034    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1a3c1bc5e08"
	I0722 04:24:01.801795    4749 logs.go:123] Gathering logs for etcd [cdb2f02c95ca] ...
	I0722 04:24:01.801805    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdb2f02c95ca"
	I0722 04:24:01.817544    4749 logs.go:123] Gathering logs for kube-scheduler [9673cbf4cea7] ...
	I0722 04:24:01.817555    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9673cbf4cea7"
	I0722 04:24:01.840135    4749 logs.go:123] Gathering logs for kube-controller-manager [b9a200dc8c73] ...
	I0722 04:24:01.840145    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9a200dc8c73"
	I0722 04:24:01.857286    4749 logs.go:123] Gathering logs for kube-controller-manager [107f02380e96] ...
	I0722 04:24:01.857299    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 107f02380e96"
	I0722 04:24:01.870701    4749 logs.go:123] Gathering logs for storage-provisioner [ac2f27131054] ...
	I0722 04:24:01.870714    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac2f27131054"
	I0722 04:24:01.881788    4749 logs.go:123] Gathering logs for container status ...
	I0722 04:24:01.881803    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 04:24:01.893272    4749 logs.go:123] Gathering logs for kube-apiserver [b242274d2995] ...
	I0722 04:24:01.893283    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b242274d2995"
	I0722 04:24:01.917795    4749 logs.go:123] Gathering logs for storage-provisioner [3222ecbcbbb5] ...
	I0722 04:24:01.917804    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3222ecbcbbb5"
	I0722 04:24:01.929379    4749 logs.go:123] Gathering logs for Docker ...
	I0722 04:24:01.929390    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0722 04:24:01.952334    4749 logs.go:123] Gathering logs for dmesg ...
	I0722 04:24:01.952342    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 04:24:01.956456    4749 logs.go:123] Gathering logs for kube-scheduler [829d882a5dcf] ...
	I0722 04:24:01.956463    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 829d882a5dcf"
	I0722 04:24:01.970553    4749 logs.go:123] Gathering logs for kubelet ...
	I0722 04:24:01.970564    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 04:24:02.007738    4749 logs.go:123] Gathering logs for kube-apiserver [6f7819ffc2dd] ...
	I0722 04:24:02.007750    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f7819ffc2dd"
	I0722 04:24:02.024688    4749 logs.go:123] Gathering logs for coredns [a11f092c49f3] ...
	I0722 04:24:02.024698    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a11f092c49f3"
	I0722 04:24:02.036106    4749 logs.go:123] Gathering logs for kube-proxy [1be7d7e3405b] ...
	I0722 04:24:02.036118    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1be7d7e3405b"
	I0722 04:24:04.549672    4749 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:24:09.652949    4522 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:24:09.552034    4749 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:24:09.552238    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0722 04:24:09.568753    4749 logs.go:276] 2 containers: [6f7819ffc2dd b242274d2995]
	I0722 04:24:09.568831    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0722 04:24:09.581458    4749 logs.go:276] 2 containers: [c1a3c1bc5e08 cdb2f02c95ca]
	I0722 04:24:09.581526    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0722 04:24:09.592685    4749 logs.go:276] 1 containers: [a11f092c49f3]
	I0722 04:24:09.592751    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0722 04:24:09.603102    4749 logs.go:276] 2 containers: [829d882a5dcf 9673cbf4cea7]
	I0722 04:24:09.603160    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0722 04:24:09.613083    4749 logs.go:276] 1 containers: [1be7d7e3405b]
	I0722 04:24:09.613147    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0722 04:24:09.623736    4749 logs.go:276] 2 containers: [b9a200dc8c73 107f02380e96]
	I0722 04:24:09.623799    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0722 04:24:09.633883    4749 logs.go:276] 0 containers: []
	W0722 04:24:09.633897    4749 logs.go:278] No container was found matching "kindnet"
	I0722 04:24:09.633951    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0722 04:24:09.644648    4749 logs.go:276] 2 containers: [ac2f27131054 3222ecbcbbb5]
	I0722 04:24:09.644664    4749 logs.go:123] Gathering logs for Docker ...
	I0722 04:24:09.644670    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0722 04:24:09.667847    4749 logs.go:123] Gathering logs for kube-apiserver [6f7819ffc2dd] ...
	I0722 04:24:09.667857    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f7819ffc2dd"
	I0722 04:24:09.684634    4749 logs.go:123] Gathering logs for kube-controller-manager [b9a200dc8c73] ...
	I0722 04:24:09.684647    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9a200dc8c73"
	I0722 04:24:09.702004    4749 logs.go:123] Gathering logs for kubelet ...
	I0722 04:24:09.702014    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 04:24:09.741632    4749 logs.go:123] Gathering logs for dmesg ...
	I0722 04:24:09.741641    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 04:24:09.746048    4749 logs.go:123] Gathering logs for kube-scheduler [829d882a5dcf] ...
	I0722 04:24:09.746054    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 829d882a5dcf"
	I0722 04:24:09.758708    4749 logs.go:123] Gathering logs for storage-provisioner [3222ecbcbbb5] ...
	I0722 04:24:09.758723    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3222ecbcbbb5"
	I0722 04:24:09.770459    4749 logs.go:123] Gathering logs for kube-apiserver [b242274d2995] ...
	I0722 04:24:09.770469    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b242274d2995"
	I0722 04:24:09.795102    4749 logs.go:123] Gathering logs for etcd [cdb2f02c95ca] ...
	I0722 04:24:09.795113    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdb2f02c95ca"
	I0722 04:24:09.813255    4749 logs.go:123] Gathering logs for coredns [a11f092c49f3] ...
	I0722 04:24:09.813265    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a11f092c49f3"
	I0722 04:24:09.824907    4749 logs.go:123] Gathering logs for kube-scheduler [9673cbf4cea7] ...
	I0722 04:24:09.824917    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9673cbf4cea7"
	I0722 04:24:09.845607    4749 logs.go:123] Gathering logs for kube-proxy [1be7d7e3405b] ...
	I0722 04:24:09.845618    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1be7d7e3405b"
	I0722 04:24:09.857802    4749 logs.go:123] Gathering logs for kube-controller-manager [107f02380e96] ...
	I0722 04:24:09.857812    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 107f02380e96"
	I0722 04:24:09.871750    4749 logs.go:123] Gathering logs for storage-provisioner [ac2f27131054] ...
	I0722 04:24:09.871759    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac2f27131054"
	I0722 04:24:09.882745    4749 logs.go:123] Gathering logs for container status ...
	I0722 04:24:09.882755    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 04:24:09.894560    4749 logs.go:123] Gathering logs for describe nodes ...
	I0722 04:24:09.894571    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0722 04:24:09.928823    4749 logs.go:123] Gathering logs for etcd [c1a3c1bc5e08] ...
	I0722 04:24:09.928834    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1a3c1bc5e08"
	I0722 04:24:14.655106    4522 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:24:14.655321    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0722 04:24:14.676654    4522 logs.go:276] 1 containers: [ff0a72834be9]
	I0722 04:24:14.676750    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0722 04:24:14.690153    4522 logs.go:276] 1 containers: [a443754c5936]
	I0722 04:24:14.690230    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0722 04:24:14.702100    4522 logs.go:276] 4 containers: [11f612391bb5 3aa1fabe8d3d cc88e2e59cc9 f695590f14ba]
	I0722 04:24:14.702166    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0722 04:24:14.712704    4522 logs.go:276] 1 containers: [19fea8cb2f86]
	I0722 04:24:14.712766    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0722 04:24:14.723076    4522 logs.go:276] 1 containers: [812f238bbb81]
	I0722 04:24:14.723149    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0722 04:24:14.734184    4522 logs.go:276] 1 containers: [e86dcf4cf2ad]
	I0722 04:24:14.734260    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0722 04:24:14.744061    4522 logs.go:276] 0 containers: []
	W0722 04:24:14.744071    4522 logs.go:278] No container was found matching "kindnet"
	I0722 04:24:14.744122    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0722 04:24:14.754928    4522 logs.go:276] 1 containers: [4b4fab967404]
	I0722 04:24:14.754946    4522 logs.go:123] Gathering logs for etcd [a443754c5936] ...
	I0722 04:24:14.754952    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a443754c5936"
	I0722 04:24:14.769122    4522 logs.go:123] Gathering logs for kube-controller-manager [e86dcf4cf2ad] ...
	I0722 04:24:14.769136    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e86dcf4cf2ad"
	I0722 04:24:14.786965    4522 logs.go:123] Gathering logs for kube-apiserver [ff0a72834be9] ...
	I0722 04:24:14.786977    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff0a72834be9"
	I0722 04:24:14.801457    4522 logs.go:123] Gathering logs for coredns [11f612391bb5] ...
	I0722 04:24:14.801468    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11f612391bb5"
	I0722 04:24:14.814581    4522 logs.go:123] Gathering logs for coredns [f695590f14ba] ...
	I0722 04:24:14.814593    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f695590f14ba"
	I0722 04:24:14.826014    4522 logs.go:123] Gathering logs for kube-proxy [812f238bbb81] ...
	I0722 04:24:14.826026    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 812f238bbb81"
	I0722 04:24:14.837707    4522 logs.go:123] Gathering logs for container status ...
	I0722 04:24:14.837720    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 04:24:14.849660    4522 logs.go:123] Gathering logs for dmesg ...
	I0722 04:24:14.849673    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 04:24:14.854472    4522 logs.go:123] Gathering logs for coredns [cc88e2e59cc9] ...
	I0722 04:24:14.854481    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc88e2e59cc9"
	I0722 04:24:14.866281    4522 logs.go:123] Gathering logs for kube-scheduler [19fea8cb2f86] ...
	I0722 04:24:14.866293    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19fea8cb2f86"
	I0722 04:24:14.885728    4522 logs.go:123] Gathering logs for Docker ...
	I0722 04:24:14.885738    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0722 04:24:14.910900    4522 logs.go:123] Gathering logs for kubelet ...
	I0722 04:24:14.910908    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0722 04:24:14.928691    4522 logs.go:138] Found kubelet problem: Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: W0722 11:17:46.135858    4280 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:24:14.928783    4522 logs.go:138] Found kubelet problem: Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: E0722 11:17:46.135900    4280 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:24:14.945010    4522 logs.go:138] Found kubelet problem: Jul 22 11:21:55 running-upgrade-724000 kubelet[11305]: W0722 11:21:55.461534   11305 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:24:14.945103    4522 logs.go:138] Found kubelet problem: Jul 22 11:21:55 running-upgrade-724000 kubelet[11305]: E0722 11:21:55.461602   11305 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	I0722 04:24:14.946321    4522 logs.go:123] Gathering logs for describe nodes ...
	I0722 04:24:14.946330    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0722 04:24:14.979521    4522 logs.go:123] Gathering logs for coredns [3aa1fabe8d3d] ...
	I0722 04:24:14.979534    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3aa1fabe8d3d"
	I0722 04:24:14.991521    4522 logs.go:123] Gathering logs for storage-provisioner [4b4fab967404] ...
	I0722 04:24:14.991533    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b4fab967404"
	I0722 04:24:15.002909    4522 out.go:304] Setting ErrFile to fd 2...
	I0722 04:24:15.002919    4522 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0722 04:24:15.002947    4522 out.go:239] X Problems detected in kubelet:
	W0722 04:24:15.002952    4522 out.go:239]   Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: W0722 11:17:46.135858    4280 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:24:15.002960    4522 out.go:239]   Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: E0722 11:17:46.135900    4280 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:24:15.002965    4522 out.go:239]   Jul 22 11:21:55 running-upgrade-724000 kubelet[11305]: W0722 11:21:55.461534   11305 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:24:15.002969    4522 out.go:239]   Jul 22 11:21:55 running-upgrade-724000 kubelet[11305]: E0722 11:21:55.461602   11305 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	I0722 04:24:15.002971    4522 out.go:304] Setting ErrFile to fd 2...
	I0722 04:24:15.003012    4522 out.go:338] TERM=,COLORTERM=, which probably does not support color
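
Both processes in this log (4522 and 4749) repeat the same cycle: probe https://10.0.2.15:8443/healthz, hit the client timeout, then re-collect component logs, so the apiserver never reports ready before the test gives up. A minimal sketch of the same probe, assuming it is run from inside the guest (10.0.2.15 is the guest's own address on QEMU's user-mode network and is not reachable from the host):

    # Probe the apiserver health endpoint with a short timeout;
    # -k skips TLS verification because the apiserver serves a cluster-internal certificate
    curl -k --max-time 5 https://10.0.2.15:8443/healthz
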
	I0722 04:24:12.449114    4749 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:24:17.451293    4749 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:24:17.451483    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0722 04:24:17.464179    4749 logs.go:276] 2 containers: [6f7819ffc2dd b242274d2995]
	I0722 04:24:17.464250    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0722 04:24:17.475103    4749 logs.go:276] 2 containers: [c1a3c1bc5e08 cdb2f02c95ca]
	I0722 04:24:17.475177    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0722 04:24:17.485688    4749 logs.go:276] 1 containers: [a11f092c49f3]
	I0722 04:24:17.485753    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0722 04:24:17.496441    4749 logs.go:276] 2 containers: [829d882a5dcf 9673cbf4cea7]
	I0722 04:24:17.496513    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0722 04:24:17.507511    4749 logs.go:276] 1 containers: [1be7d7e3405b]
	I0722 04:24:17.507574    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0722 04:24:17.518653    4749 logs.go:276] 2 containers: [b9a200dc8c73 107f02380e96]
	I0722 04:24:17.518722    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0722 04:24:17.528830    4749 logs.go:276] 0 containers: []
	W0722 04:24:17.528840    4749 logs.go:278] No container was found matching "kindnet"
	I0722 04:24:17.528896    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0722 04:24:17.539518    4749 logs.go:276] 2 containers: [ac2f27131054 3222ecbcbbb5]
	I0722 04:24:17.539535    4749 logs.go:123] Gathering logs for kube-apiserver [b242274d2995] ...
	I0722 04:24:17.539540    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b242274d2995"
	I0722 04:24:17.567679    4749 logs.go:123] Gathering logs for coredns [a11f092c49f3] ...
	I0722 04:24:17.567691    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a11f092c49f3"
	I0722 04:24:17.579121    4749 logs.go:123] Gathering logs for kube-proxy [1be7d7e3405b] ...
	I0722 04:24:17.579130    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1be7d7e3405b"
	I0722 04:24:17.590563    4749 logs.go:123] Gathering logs for kubelet ...
	I0722 04:24:17.590574    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 04:24:17.630113    4749 logs.go:123] Gathering logs for describe nodes ...
	I0722 04:24:17.630120    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0722 04:24:17.665839    4749 logs.go:123] Gathering logs for kube-scheduler [829d882a5dcf] ...
	I0722 04:24:17.665850    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 829d882a5dcf"
	I0722 04:24:17.678741    4749 logs.go:123] Gathering logs for kube-scheduler [9673cbf4cea7] ...
	I0722 04:24:17.678751    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9673cbf4cea7"
	I0722 04:24:17.700547    4749 logs.go:123] Gathering logs for kube-controller-manager [107f02380e96] ...
	I0722 04:24:17.700561    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 107f02380e96"
	I0722 04:24:17.714892    4749 logs.go:123] Gathering logs for storage-provisioner [3222ecbcbbb5] ...
	I0722 04:24:17.714903    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3222ecbcbbb5"
	I0722 04:24:17.726901    4749 logs.go:123] Gathering logs for kube-apiserver [6f7819ffc2dd] ...
	I0722 04:24:17.726912    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f7819ffc2dd"
	I0722 04:24:17.741398    4749 logs.go:123] Gathering logs for etcd [c1a3c1bc5e08] ...
	I0722 04:24:17.741408    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1a3c1bc5e08"
	I0722 04:24:17.755159    4749 logs.go:123] Gathering logs for dmesg ...
	I0722 04:24:17.755170    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 04:24:17.759374    4749 logs.go:123] Gathering logs for etcd [cdb2f02c95ca] ...
	I0722 04:24:17.759380    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdb2f02c95ca"
	I0722 04:24:17.774154    4749 logs.go:123] Gathering logs for kube-controller-manager [b9a200dc8c73] ...
	I0722 04:24:17.774166    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9a200dc8c73"
	I0722 04:24:17.791875    4749 logs.go:123] Gathering logs for storage-provisioner [ac2f27131054] ...
	I0722 04:24:17.791886    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac2f27131054"
	I0722 04:24:17.804192    4749 logs.go:123] Gathering logs for Docker ...
	I0722 04:24:17.804205    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0722 04:24:17.826420    4749 logs.go:123] Gathering logs for container status ...
	I0722 04:24:17.826429    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 04:24:20.340133    4749 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:24:25.007007    4522 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:24:25.342269    4749 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:24:25.342381    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0722 04:24:25.357838    4749 logs.go:276] 2 containers: [6f7819ffc2dd b242274d2995]
	I0722 04:24:25.357913    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0722 04:24:25.376291    4749 logs.go:276] 2 containers: [c1a3c1bc5e08 cdb2f02c95ca]
	I0722 04:24:25.376359    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0722 04:24:25.387244    4749 logs.go:276] 1 containers: [a11f092c49f3]
	I0722 04:24:25.387318    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0722 04:24:25.398203    4749 logs.go:276] 2 containers: [829d882a5dcf 9673cbf4cea7]
	I0722 04:24:25.398268    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0722 04:24:25.408560    4749 logs.go:276] 1 containers: [1be7d7e3405b]
	I0722 04:24:25.408626    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0722 04:24:25.419244    4749 logs.go:276] 2 containers: [b9a200dc8c73 107f02380e96]
	I0722 04:24:25.419313    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0722 04:24:25.428856    4749 logs.go:276] 0 containers: []
	W0722 04:24:25.428875    4749 logs.go:278] No container was found matching "kindnet"
	I0722 04:24:25.428925    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0722 04:24:25.439464    4749 logs.go:276] 2 containers: [ac2f27131054 3222ecbcbbb5]
	I0722 04:24:25.439482    4749 logs.go:123] Gathering logs for kube-scheduler [829d882a5dcf] ...
	I0722 04:24:25.439487    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 829d882a5dcf"
	I0722 04:24:25.452639    4749 logs.go:123] Gathering logs for storage-provisioner [ac2f27131054] ...
	I0722 04:24:25.452651    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac2f27131054"
	I0722 04:24:25.464252    4749 logs.go:123] Gathering logs for container status ...
	I0722 04:24:25.464263    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 04:24:25.476238    4749 logs.go:123] Gathering logs for dmesg ...
	I0722 04:24:25.476254    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 04:24:25.480459    4749 logs.go:123] Gathering logs for kube-apiserver [6f7819ffc2dd] ...
	I0722 04:24:25.480467    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f7819ffc2dd"
	I0722 04:24:25.494683    4749 logs.go:123] Gathering logs for kube-scheduler [9673cbf4cea7] ...
	I0722 04:24:25.494695    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9673cbf4cea7"
	I0722 04:24:25.521086    4749 logs.go:123] Gathering logs for kube-proxy [1be7d7e3405b] ...
	I0722 04:24:25.521096    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1be7d7e3405b"
	I0722 04:24:25.532932    4749 logs.go:123] Gathering logs for storage-provisioner [3222ecbcbbb5] ...
	I0722 04:24:25.532943    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3222ecbcbbb5"
	I0722 04:24:25.544221    4749 logs.go:123] Gathering logs for kubelet ...
	I0722 04:24:25.544232    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 04:24:25.582804    4749 logs.go:123] Gathering logs for etcd [cdb2f02c95ca] ...
	I0722 04:24:25.582813    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdb2f02c95ca"
	I0722 04:24:25.597696    4749 logs.go:123] Gathering logs for coredns [a11f092c49f3] ...
	I0722 04:24:25.597706    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a11f092c49f3"
	I0722 04:24:25.608426    4749 logs.go:123] Gathering logs for kube-controller-manager [107f02380e96] ...
	I0722 04:24:25.608439    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 107f02380e96"
	I0722 04:24:25.622104    4749 logs.go:123] Gathering logs for describe nodes ...
	I0722 04:24:25.622115    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0722 04:24:25.657861    4749 logs.go:123] Gathering logs for kube-apiserver [b242274d2995] ...
	I0722 04:24:25.657878    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b242274d2995"
	I0722 04:24:25.685981    4749 logs.go:123] Gathering logs for etcd [c1a3c1bc5e08] ...
	I0722 04:24:25.685992    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1a3c1bc5e08"
	I0722 04:24:25.700298    4749 logs.go:123] Gathering logs for kube-controller-manager [b9a200dc8c73] ...
	I0722 04:24:25.700308    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9a200dc8c73"
	I0722 04:24:25.722927    4749 logs.go:123] Gathering logs for Docker ...
	I0722 04:24:25.722938    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0722 04:24:30.008525    4522 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:24:30.008730    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0722 04:24:30.024954    4522 logs.go:276] 1 containers: [ff0a72834be9]
	I0722 04:24:30.025033    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0722 04:24:30.038343    4522 logs.go:276] 1 containers: [a443754c5936]
	I0722 04:24:30.038413    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0722 04:24:30.051643    4522 logs.go:276] 4 containers: [11f612391bb5 3aa1fabe8d3d cc88e2e59cc9 f695590f14ba]
	I0722 04:24:30.051712    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0722 04:24:30.071466    4522 logs.go:276] 1 containers: [19fea8cb2f86]
	I0722 04:24:30.071526    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0722 04:24:30.081889    4522 logs.go:276] 1 containers: [812f238bbb81]
	I0722 04:24:30.081953    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0722 04:24:30.092565    4522 logs.go:276] 1 containers: [e86dcf4cf2ad]
	I0722 04:24:30.092627    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0722 04:24:30.102699    4522 logs.go:276] 0 containers: []
	W0722 04:24:30.102713    4522 logs.go:278] No container was found matching "kindnet"
	I0722 04:24:30.102770    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0722 04:24:30.112950    4522 logs.go:276] 1 containers: [4b4fab967404]
	I0722 04:24:30.112969    4522 logs.go:123] Gathering logs for Docker ...
	I0722 04:24:30.112973    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0722 04:24:30.138377    4522 logs.go:123] Gathering logs for storage-provisioner [4b4fab967404] ...
	I0722 04:24:30.138387    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b4fab967404"
	I0722 04:24:30.150214    4522 logs.go:123] Gathering logs for describe nodes ...
	I0722 04:24:30.150226    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0722 04:24:30.184271    4522 logs.go:123] Gathering logs for coredns [11f612391bb5] ...
	I0722 04:24:30.184282    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11f612391bb5"
	I0722 04:24:30.195539    4522 logs.go:123] Gathering logs for coredns [cc88e2e59cc9] ...
	I0722 04:24:30.195553    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc88e2e59cc9"
	I0722 04:24:30.207541    4522 logs.go:123] Gathering logs for container status ...
	I0722 04:24:30.207555    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 04:24:30.218929    4522 logs.go:123] Gathering logs for dmesg ...
	I0722 04:24:30.218943    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 04:24:30.223300    4522 logs.go:123] Gathering logs for kube-apiserver [ff0a72834be9] ...
	I0722 04:24:30.223310    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff0a72834be9"
	I0722 04:24:30.241955    4522 logs.go:123] Gathering logs for kube-controller-manager [e86dcf4cf2ad] ...
	I0722 04:24:30.241968    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e86dcf4cf2ad"
	I0722 04:24:30.259184    4522 logs.go:123] Gathering logs for coredns [f695590f14ba] ...
	I0722 04:24:30.259196    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f695590f14ba"
	I0722 04:24:30.271574    4522 logs.go:123] Gathering logs for kube-scheduler [19fea8cb2f86] ...
	I0722 04:24:30.271585    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19fea8cb2f86"
	I0722 04:24:30.292410    4522 logs.go:123] Gathering logs for kube-proxy [812f238bbb81] ...
	I0722 04:24:30.292420    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 812f238bbb81"
	I0722 04:24:30.304258    4522 logs.go:123] Gathering logs for kubelet ...
	I0722 04:24:30.304268    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0722 04:24:30.322846    4522 logs.go:138] Found kubelet problem: Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: W0722 11:17:46.135858    4280 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:24:30.322950    4522 logs.go:138] Found kubelet problem: Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: E0722 11:17:46.135900    4280 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:24:30.338953    4522 logs.go:138] Found kubelet problem: Jul 22 11:21:55 running-upgrade-724000 kubelet[11305]: W0722 11:21:55.461534   11305 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:24:30.339051    4522 logs.go:138] Found kubelet problem: Jul 22 11:21:55 running-upgrade-724000 kubelet[11305]: E0722 11:21:55.461602   11305 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	I0722 04:24:30.340271    4522 logs.go:123] Gathering logs for etcd [a443754c5936] ...
	I0722 04:24:30.340281    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a443754c5936"
	I0722 04:24:30.364179    4522 logs.go:123] Gathering logs for coredns [3aa1fabe8d3d] ...
	I0722 04:24:30.364191    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3aa1fabe8d3d"
	I0722 04:24:30.385664    4522 out.go:304] Setting ErrFile to fd 2...
	I0722 04:24:30.385675    4522 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0722 04:24:30.385703    4522 out.go:239] X Problems detected in kubelet:
	W0722 04:24:30.385710    4522 out.go:239]   Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: W0722 11:17:46.135858    4280 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:24:30.385758    4522 out.go:239]   Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: E0722 11:17:46.135900    4280 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:24:30.385770    4522 out.go:239]   Jul 22 11:21:55 running-upgrade-724000 kubelet[11305]: W0722 11:21:55.461534   11305 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:24:30.385779    4522 out.go:239]   Jul 22 11:21:55 running-upgrade-724000 kubelet[11305]: E0722 11:21:55.461602   11305 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	I0722 04:24:30.385800    4522 out.go:304] Setting ErrFile to fd 2...
	I0722 04:24:30.385813    4522 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:24:28.249031    4749 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:24:33.251332    4749 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:24:33.251606    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0722 04:24:33.279846    4749 logs.go:276] 2 containers: [6f7819ffc2dd b242274d2995]
	I0722 04:24:33.279980    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0722 04:24:33.297984    4749 logs.go:276] 2 containers: [c1a3c1bc5e08 cdb2f02c95ca]
	I0722 04:24:33.298064    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0722 04:24:33.320108    4749 logs.go:276] 1 containers: [a11f092c49f3]
	I0722 04:24:33.320178    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0722 04:24:33.331058    4749 logs.go:276] 2 containers: [829d882a5dcf 9673cbf4cea7]
	I0722 04:24:33.331125    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0722 04:24:33.341721    4749 logs.go:276] 1 containers: [1be7d7e3405b]
	I0722 04:24:33.341786    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0722 04:24:33.356099    4749 logs.go:276] 2 containers: [b9a200dc8c73 107f02380e96]
	I0722 04:24:33.356163    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0722 04:24:33.367357    4749 logs.go:276] 0 containers: []
	W0722 04:24:33.367370    4749 logs.go:278] No container was found matching "kindnet"
	I0722 04:24:33.367437    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0722 04:24:33.379102    4749 logs.go:276] 2 containers: [ac2f27131054 3222ecbcbbb5]
	I0722 04:24:33.379119    4749 logs.go:123] Gathering logs for kube-proxy [1be7d7e3405b] ...
	I0722 04:24:33.379125    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1be7d7e3405b"
	I0722 04:24:33.391627    4749 logs.go:123] Gathering logs for storage-provisioner [3222ecbcbbb5] ...
	I0722 04:24:33.391643    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3222ecbcbbb5"
	I0722 04:24:33.402904    4749 logs.go:123] Gathering logs for Docker ...
	I0722 04:24:33.402915    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0722 04:24:33.426703    4749 logs.go:123] Gathering logs for kube-apiserver [b242274d2995] ...
	I0722 04:24:33.426710    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b242274d2995"
	I0722 04:24:33.451880    4749 logs.go:123] Gathering logs for kube-controller-manager [b9a200dc8c73] ...
	I0722 04:24:33.451891    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9a200dc8c73"
	I0722 04:24:33.471146    4749 logs.go:123] Gathering logs for storage-provisioner [ac2f27131054] ...
	I0722 04:24:33.471158    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac2f27131054"
	I0722 04:24:33.483283    4749 logs.go:123] Gathering logs for kubelet ...
	I0722 04:24:33.483297    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 04:24:33.521703    4749 logs.go:123] Gathering logs for kube-apiserver [6f7819ffc2dd] ...
	I0722 04:24:33.521711    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f7819ffc2dd"
	I0722 04:24:33.535599    4749 logs.go:123] Gathering logs for etcd [c1a3c1bc5e08] ...
	I0722 04:24:33.535609    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1a3c1bc5e08"
	I0722 04:24:33.549316    4749 logs.go:123] Gathering logs for coredns [a11f092c49f3] ...
	I0722 04:24:33.549330    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a11f092c49f3"
	I0722 04:24:33.560714    4749 logs.go:123] Gathering logs for kube-scheduler [829d882a5dcf] ...
	I0722 04:24:33.560725    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 829d882a5dcf"
	I0722 04:24:33.578645    4749 logs.go:123] Gathering logs for kube-controller-manager [107f02380e96] ...
	I0722 04:24:33.578659    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 107f02380e96"
	I0722 04:24:33.592427    4749 logs.go:123] Gathering logs for dmesg ...
	I0722 04:24:33.592439    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 04:24:33.596812    4749 logs.go:123] Gathering logs for etcd [cdb2f02c95ca] ...
	I0722 04:24:33.596818    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdb2f02c95ca"
	I0722 04:24:33.610730    4749 logs.go:123] Gathering logs for kube-scheduler [9673cbf4cea7] ...
	I0722 04:24:33.610746    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9673cbf4cea7"
	I0722 04:24:33.631495    4749 logs.go:123] Gathering logs for container status ...
	I0722 04:24:33.631510    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 04:24:33.644071    4749 logs.go:123] Gathering logs for describe nodes ...
	I0722 04:24:33.644085    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0722 04:24:36.181111    4749 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:24:40.389797    4522 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:24:41.183351    4749 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:24:41.183551    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0722 04:24:41.199456    4749 logs.go:276] 2 containers: [6f7819ffc2dd b242274d2995]
	I0722 04:24:41.199535    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0722 04:24:41.210927    4749 logs.go:276] 2 containers: [c1a3c1bc5e08 cdb2f02c95ca]
	I0722 04:24:41.210996    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0722 04:24:41.221250    4749 logs.go:276] 1 containers: [a11f092c49f3]
	I0722 04:24:41.221313    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0722 04:24:41.231590    4749 logs.go:276] 2 containers: [829d882a5dcf 9673cbf4cea7]
	I0722 04:24:41.231663    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0722 04:24:41.243324    4749 logs.go:276] 1 containers: [1be7d7e3405b]
	I0722 04:24:41.243393    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0722 04:24:41.253988    4749 logs.go:276] 2 containers: [b9a200dc8c73 107f02380e96]
	I0722 04:24:41.254056    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0722 04:24:41.264120    4749 logs.go:276] 0 containers: []
	W0722 04:24:41.264132    4749 logs.go:278] No container was found matching "kindnet"
	I0722 04:24:41.264192    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0722 04:24:41.274763    4749 logs.go:276] 2 containers: [ac2f27131054 3222ecbcbbb5]
	I0722 04:24:41.274787    4749 logs.go:123] Gathering logs for describe nodes ...
	I0722 04:24:41.274793    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0722 04:24:41.308504    4749 logs.go:123] Gathering logs for kube-apiserver [b242274d2995] ...
	I0722 04:24:41.308517    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b242274d2995"
	I0722 04:24:41.333086    4749 logs.go:123] Gathering logs for kube-proxy [1be7d7e3405b] ...
	I0722 04:24:41.333096    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1be7d7e3405b"
	I0722 04:24:41.349554    4749 logs.go:123] Gathering logs for storage-provisioner [3222ecbcbbb5] ...
	I0722 04:24:41.349568    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3222ecbcbbb5"
	I0722 04:24:41.378206    4749 logs.go:123] Gathering logs for kube-scheduler [829d882a5dcf] ...
	I0722 04:24:41.378219    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 829d882a5dcf"
	I0722 04:24:41.405143    4749 logs.go:123] Gathering logs for kube-controller-manager [107f02380e96] ...
	I0722 04:24:41.405159    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 107f02380e96"
	I0722 04:24:41.425820    4749 logs.go:123] Gathering logs for dmesg ...
	I0722 04:24:41.425835    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 04:24:41.430183    4749 logs.go:123] Gathering logs for kube-apiserver [6f7819ffc2dd] ...
	I0722 04:24:41.430190    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f7819ffc2dd"
	I0722 04:24:41.449323    4749 logs.go:123] Gathering logs for etcd [c1a3c1bc5e08] ...
	I0722 04:24:41.449334    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1a3c1bc5e08"
	I0722 04:24:41.462955    4749 logs.go:123] Gathering logs for coredns [a11f092c49f3] ...
	I0722 04:24:41.462967    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a11f092c49f3"
	I0722 04:24:41.475059    4749 logs.go:123] Gathering logs for kubelet ...
	I0722 04:24:41.475071    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 04:24:41.514899    4749 logs.go:123] Gathering logs for kube-controller-manager [b9a200dc8c73] ...
	I0722 04:24:41.514913    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9a200dc8c73"
	I0722 04:24:41.532080    4749 logs.go:123] Gathering logs for storage-provisioner [ac2f27131054] ...
	I0722 04:24:41.532091    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac2f27131054"
	I0722 04:24:41.544866    4749 logs.go:123] Gathering logs for etcd [cdb2f02c95ca] ...
	I0722 04:24:41.544876    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdb2f02c95ca"
	I0722 04:24:41.559152    4749 logs.go:123] Gathering logs for kube-scheduler [9673cbf4cea7] ...
	I0722 04:24:41.559163    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9673cbf4cea7"
	I0722 04:24:41.580310    4749 logs.go:123] Gathering logs for Docker ...
	I0722 04:24:41.580321    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0722 04:24:41.604900    4749 logs.go:123] Gathering logs for container status ...
	I0722 04:24:41.604911    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 04:24:45.392121    4522 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:24:45.392406    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0722 04:24:44.118795    4749 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:24:45.426403    4522 logs.go:276] 1 containers: [ff0a72834be9]
	I0722 04:24:45.426529    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0722 04:24:45.445700    4522 logs.go:276] 1 containers: [a443754c5936]
	I0722 04:24:45.445794    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0722 04:24:45.476238    4522 logs.go:276] 4 containers: [11f612391bb5 3aa1fabe8d3d cc88e2e59cc9 f695590f14ba]
	I0722 04:24:45.476317    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0722 04:24:45.488059    4522 logs.go:276] 1 containers: [19fea8cb2f86]
	I0722 04:24:45.488131    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0722 04:24:45.498806    4522 logs.go:276] 1 containers: [812f238bbb81]
	I0722 04:24:45.498879    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0722 04:24:45.508847    4522 logs.go:276] 1 containers: [e86dcf4cf2ad]
	I0722 04:24:45.508906    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0722 04:24:45.519477    4522 logs.go:276] 0 containers: []
	W0722 04:24:45.519489    4522 logs.go:278] No container was found matching "kindnet"
	I0722 04:24:45.519550    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0722 04:24:45.529637    4522 logs.go:276] 1 containers: [4b4fab967404]
	I0722 04:24:45.529655    4522 logs.go:123] Gathering logs for kubelet ...
	I0722 04:24:45.529660    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0722 04:24:45.545924    4522 logs.go:138] Found kubelet problem: Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: W0722 11:17:46.135858    4280 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:24:45.546021    4522 logs.go:138] Found kubelet problem: Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: E0722 11:17:46.135900    4280 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:24:45.561362    4522 logs.go:138] Found kubelet problem: Jul 22 11:21:55 running-upgrade-724000 kubelet[11305]: W0722 11:21:55.461534   11305 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:24:45.561455    4522 logs.go:138] Found kubelet problem: Jul 22 11:21:55 running-upgrade-724000 kubelet[11305]: E0722 11:21:55.461602   11305 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	I0722 04:24:45.562654    4522 logs.go:123] Gathering logs for storage-provisioner [4b4fab967404] ...
	I0722 04:24:45.562663    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b4fab967404"
	I0722 04:24:45.574557    4522 logs.go:123] Gathering logs for dmesg ...
	I0722 04:24:45.574567    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 04:24:45.579147    4522 logs.go:123] Gathering logs for coredns [cc88e2e59cc9] ...
	I0722 04:24:45.579156    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc88e2e59cc9"
	I0722 04:24:45.591244    4522 logs.go:123] Gathering logs for coredns [f695590f14ba] ...
	I0722 04:24:45.591258    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f695590f14ba"
	I0722 04:24:45.602740    4522 logs.go:123] Gathering logs for kube-scheduler [19fea8cb2f86] ...
	I0722 04:24:45.602754    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19fea8cb2f86"
	I0722 04:24:45.618171    4522 logs.go:123] Gathering logs for Docker ...
	I0722 04:24:45.618182    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0722 04:24:45.643426    4522 logs.go:123] Gathering logs for container status ...
	I0722 04:24:45.643436    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 04:24:45.656499    4522 logs.go:123] Gathering logs for describe nodes ...
	I0722 04:24:45.656509    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0722 04:24:45.693812    4522 logs.go:123] Gathering logs for kube-apiserver [ff0a72834be9] ...
	I0722 04:24:45.693828    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff0a72834be9"
	I0722 04:24:45.709755    4522 logs.go:123] Gathering logs for etcd [a443754c5936] ...
	I0722 04:24:45.709766    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a443754c5936"
	I0722 04:24:45.724733    4522 logs.go:123] Gathering logs for coredns [11f612391bb5] ...
	I0722 04:24:45.724745    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11f612391bb5"
	I0722 04:24:45.736065    4522 logs.go:123] Gathering logs for coredns [3aa1fabe8d3d] ...
	I0722 04:24:45.736078    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3aa1fabe8d3d"
	I0722 04:24:45.749102    4522 logs.go:123] Gathering logs for kube-proxy [812f238bbb81] ...
	I0722 04:24:45.749116    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 812f238bbb81"
	I0722 04:24:45.761292    4522 logs.go:123] Gathering logs for kube-controller-manager [e86dcf4cf2ad] ...
	I0722 04:24:45.761306    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e86dcf4cf2ad"
	I0722 04:24:45.778648    4522 out.go:304] Setting ErrFile to fd 2...
	I0722 04:24:45.778658    4522 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0722 04:24:45.778685    4522 out.go:239] X Problems detected in kubelet:
	W0722 04:24:45.778689    4522 out.go:239]   Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: W0722 11:17:46.135858    4280 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:24:45.778693    4522 out.go:239]   Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: E0722 11:17:46.135900    4280 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:24:45.778697    4522 out.go:239]   Jul 22 11:21:55 running-upgrade-724000 kubelet[11305]: W0722 11:21:55.461534   11305 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:24:45.778701    4522 out.go:239]   Jul 22 11:21:55 running-upgrade-724000 kubelet[11305]: E0722 11:21:55.461602   11305 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	I0722 04:24:45.778704    4522 out.go:304] Setting ErrFile to fd 2...
	I0722 04:24:45.778707    4522 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:24:49.121231    4749 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:24:49.121510    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0722 04:24:49.151311    4749 logs.go:276] 2 containers: [6f7819ffc2dd b242274d2995]
	I0722 04:24:49.151451    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0722 04:24:49.171052    4749 logs.go:276] 2 containers: [c1a3c1bc5e08 cdb2f02c95ca]
	I0722 04:24:49.171165    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0722 04:24:49.185718    4749 logs.go:276] 1 containers: [a11f092c49f3]
	I0722 04:24:49.185801    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0722 04:24:49.196976    4749 logs.go:276] 2 containers: [829d882a5dcf 9673cbf4cea7]
	I0722 04:24:49.197044    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0722 04:24:49.206877    4749 logs.go:276] 1 containers: [1be7d7e3405b]
	I0722 04:24:49.206958    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0722 04:24:49.217146    4749 logs.go:276] 2 containers: [b9a200dc8c73 107f02380e96]
	I0722 04:24:49.217231    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0722 04:24:49.230496    4749 logs.go:276] 0 containers: []
	W0722 04:24:49.230511    4749 logs.go:278] No container was found matching "kindnet"
	I0722 04:24:49.230583    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0722 04:24:49.240616    4749 logs.go:276] 2 containers: [ac2f27131054 3222ecbcbbb5]
	I0722 04:24:49.240634    4749 logs.go:123] Gathering logs for kubelet ...
	I0722 04:24:49.240640    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 04:24:49.279609    4749 logs.go:123] Gathering logs for etcd [cdb2f02c95ca] ...
	I0722 04:24:49.279620    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdb2f02c95ca"
	I0722 04:24:49.293968    4749 logs.go:123] Gathering logs for kube-controller-manager [107f02380e96] ...
	I0722 04:24:49.293978    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 107f02380e96"
	I0722 04:24:49.307726    4749 logs.go:123] Gathering logs for kube-proxy [1be7d7e3405b] ...
	I0722 04:24:49.307735    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1be7d7e3405b"
	I0722 04:24:49.322756    4749 logs.go:123] Gathering logs for Docker ...
	I0722 04:24:49.322767    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0722 04:24:49.347375    4749 logs.go:123] Gathering logs for container status ...
	I0722 04:24:49.347388    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 04:24:49.360133    4749 logs.go:123] Gathering logs for describe nodes ...
	I0722 04:24:49.360144    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0722 04:24:49.394709    4749 logs.go:123] Gathering logs for coredns [a11f092c49f3] ...
	I0722 04:24:49.394721    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a11f092c49f3"
	I0722 04:24:49.405659    4749 logs.go:123] Gathering logs for kube-scheduler [829d882a5dcf] ...
	I0722 04:24:49.405673    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 829d882a5dcf"
	I0722 04:24:49.421980    4749 logs.go:123] Gathering logs for kube-controller-manager [b9a200dc8c73] ...
	I0722 04:24:49.421992    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9a200dc8c73"
	I0722 04:24:49.447628    4749 logs.go:123] Gathering logs for storage-provisioner [3222ecbcbbb5] ...
	I0722 04:24:49.447640    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3222ecbcbbb5"
	I0722 04:24:49.459349    4749 logs.go:123] Gathering logs for dmesg ...
	I0722 04:24:49.459362    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 04:24:49.463780    4749 logs.go:123] Gathering logs for kube-apiserver [6f7819ffc2dd] ...
	I0722 04:24:49.463786    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f7819ffc2dd"
	I0722 04:24:49.478619    4749 logs.go:123] Gathering logs for kube-apiserver [b242274d2995] ...
	I0722 04:24:49.478631    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b242274d2995"
	I0722 04:24:49.503567    4749 logs.go:123] Gathering logs for etcd [c1a3c1bc5e08] ...
	I0722 04:24:49.503577    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1a3c1bc5e08"
	I0722 04:24:49.517695    4749 logs.go:123] Gathering logs for kube-scheduler [9673cbf4cea7] ...
	I0722 04:24:49.517706    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9673cbf4cea7"
	I0722 04:24:49.540097    4749 logs.go:123] Gathering logs for storage-provisioner [ac2f27131054] ...
	I0722 04:24:49.540108    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac2f27131054"
	I0722 04:24:52.053897    4749 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:24:57.056542    4749 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:24:57.056626    4749 kubeadm.go:597] duration metric: took 4m3.741945958s to restartPrimaryControlPlane
	W0722 04:24:57.056704    4749 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0722 04:24:57.056741    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0722 04:24:58.087990    4749 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.031253s)
	I0722 04:24:58.088046    4749 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 04:24:58.093112    4749 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0722 04:24:58.095900    4749 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 04:24:58.098615    4749 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 04:24:58.098621    4749 kubeadm.go:157] found existing configuration files:
	
	I0722 04:24:58.098642    4749 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50463 /etc/kubernetes/admin.conf
	I0722 04:24:58.101219    4749 kubeadm.go:163] "https://control-plane.minikube.internal:50463" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50463 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 04:24:58.101239    4749 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 04:24:58.104133    4749 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50463 /etc/kubernetes/kubelet.conf
	I0722 04:24:58.107280    4749 kubeadm.go:163] "https://control-plane.minikube.internal:50463" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50463 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 04:24:58.107305    4749 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 04:24:58.110136    4749 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50463 /etc/kubernetes/controller-manager.conf
	I0722 04:24:58.112672    4749 kubeadm.go:163] "https://control-plane.minikube.internal:50463" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50463 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 04:24:58.112692    4749 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 04:24:58.115769    4749 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50463 /etc/kubernetes/scheduler.conf
	I0722 04:24:58.118940    4749 kubeadm.go:163] "https://control-plane.minikube.internal:50463" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50463 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 04:24:58.118963    4749 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0722 04:24:58.121735    4749 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0722 04:24:58.139436    4749 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0722 04:24:58.139562    4749 kubeadm.go:310] [preflight] Running pre-flight checks
	I0722 04:24:58.192818    4749 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0722 04:24:58.192875    4749 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0722 04:24:58.192950    4749 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0722 04:24:58.243936    4749 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0722 04:24:58.249147    4749 out.go:204]   - Generating certificates and keys ...
	I0722 04:24:58.249181    4749 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0722 04:24:58.249212    4749 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0722 04:24:58.249258    4749 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0722 04:24:58.249316    4749 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0722 04:24:58.249353    4749 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0722 04:24:58.249379    4749 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0722 04:24:58.249412    4749 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0722 04:24:58.249443    4749 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0722 04:24:58.249486    4749 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0722 04:24:58.249541    4749 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0722 04:24:58.249565    4749 kubeadm.go:310] [certs] Using the existing "sa" key
	I0722 04:24:58.249593    4749 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0722 04:24:58.334827    4749 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0722 04:24:58.423619    4749 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0722 04:24:58.489988    4749 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0722 04:24:58.594929    4749 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0722 04:24:58.624767    4749 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0722 04:24:58.625147    4749 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0722 04:24:58.625209    4749 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0722 04:24:58.713841    4749 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0722 04:24:55.782657    4522 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:24:58.717546    4749 out.go:204]   - Booting up control plane ...
	I0722 04:24:58.717601    4749 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0722 04:24:58.717683    4749 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0722 04:24:58.717744    4749 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0722 04:24:58.719607    4749 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0722 04:24:58.720390    4749 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0722 04:25:03.222841    4749 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.502142 seconds
	I0722 04:25:03.222898    4749 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0722 04:25:03.226740    4749 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0722 04:25:03.740843    4749 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0722 04:25:03.741115    4749 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-239000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0722 04:25:04.246991    4749 kubeadm.go:310] [bootstrap-token] Using token: kimnev.ubkalfagcbm7tlf8
	I0722 04:25:04.252827    4749 out.go:204]   - Configuring RBAC rules ...
	I0722 04:25:04.252897    4749 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0722 04:25:04.252963    4749 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0722 04:25:04.258737    4749 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0722 04:25:04.259726    4749 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0722 04:25:04.260534    4749 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0722 04:25:04.261348    4749 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0722 04:25:04.264518    4749 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0722 04:25:04.454911    4749 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0722 04:25:04.651468    4749 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0722 04:25:04.651999    4749 kubeadm.go:310] 
	I0722 04:25:04.652029    4749 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0722 04:25:04.652034    4749 kubeadm.go:310] 
	I0722 04:25:04.652094    4749 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0722 04:25:04.652100    4749 kubeadm.go:310] 
	I0722 04:25:04.652114    4749 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0722 04:25:04.652140    4749 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0722 04:25:04.652192    4749 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0722 04:25:04.652197    4749 kubeadm.go:310] 
	I0722 04:25:04.652227    4749 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0722 04:25:04.652230    4749 kubeadm.go:310] 
	I0722 04:25:04.652256    4749 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0722 04:25:04.652260    4749 kubeadm.go:310] 
	I0722 04:25:04.652286    4749 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0722 04:25:04.652335    4749 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0722 04:25:04.652380    4749 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0722 04:25:04.652383    4749 kubeadm.go:310] 
	I0722 04:25:04.652427    4749 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0722 04:25:04.652468    4749 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0722 04:25:04.652471    4749 kubeadm.go:310] 
	I0722 04:25:04.652522    4749 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token kimnev.ubkalfagcbm7tlf8 \
	I0722 04:25:04.652576    4749 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e1f95f96cbafa48be8d9b2581ace651393ef041feb8f94ca3ac47ac6fd85c5e4 \
	I0722 04:25:04.652588    4749 kubeadm.go:310] 	--control-plane 
	I0722 04:25:04.652591    4749 kubeadm.go:310] 
	I0722 04:25:04.652668    4749 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0722 04:25:04.652679    4749 kubeadm.go:310] 
	I0722 04:25:04.652718    4749 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token kimnev.ubkalfagcbm7tlf8 \
	I0722 04:25:04.652776    4749 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e1f95f96cbafa48be8d9b2581ace651393ef041feb8f94ca3ac47ac6fd85c5e4 
	I0722 04:25:04.652847    4749 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0722 04:25:04.652857    4749 cni.go:84] Creating CNI manager for ""
	I0722 04:25:04.652867    4749 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0722 04:25:04.656603    4749 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0722 04:25:04.664577    4749 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0722 04:25:04.667824    4749 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0722 04:25:04.672648    4749 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0722 04:25:04.672687    4749 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 04:25:04.672697    4749 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-239000 minikube.k8s.io/updated_at=2024_07_22T04_25_04_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8e5b1d22910d5d447b525af478862a848159d7b7 minikube.k8s.io/name=stopped-upgrade-239000 minikube.k8s.io/primary=true
	I0722 04:25:04.721269    4749 ops.go:34] apiserver oom_adj: -16
	I0722 04:25:04.721305    4749 kubeadm.go:1113] duration metric: took 48.653667ms to wait for elevateKubeSystemPrivileges
	I0722 04:25:04.721317    4749 kubeadm.go:394] duration metric: took 4m11.420412583s to StartCluster
	I0722 04:25:04.721327    4749 settings.go:142] acquiring lock: {Name:mk640939e683dda0ffda5b348284f38e73fbc066 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 04:25:04.721417    4749 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19313-1127/kubeconfig
	I0722 04:25:04.721843    4749 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19313-1127/kubeconfig: {Name:mkb5cae8b3f3a2ff5a3e393f1e4daf97762f1a5b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 04:25:04.722060    4749 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0722 04:25:04.722071    4749 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0722 04:25:04.722104    4749 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-239000"
	I0722 04:25:04.722119    4749 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-239000"
	W0722 04:25:04.722122    4749 addons.go:243] addon storage-provisioner should already be in state true
	I0722 04:25:04.722137    4749 host.go:66] Checking if "stopped-upgrade-239000" exists ...
	I0722 04:25:04.722140    4749 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-239000"
	I0722 04:25:04.722155    4749 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-239000"
	I0722 04:25:04.722156    4749 config.go:182] Loaded profile config "stopped-upgrade-239000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0722 04:25:04.723243    4749 kapi.go:59] client config for stopped-upgrade-239000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/stopped-upgrade-239000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/stopped-upgrade-239000/client.key", CAFile:"/Users/jenkins/minikube-integration/19313-1127/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x101fef790), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0722 04:25:04.723375    4749 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-239000"
	W0722 04:25:04.723380    4749 addons.go:243] addon default-storageclass should already be in state true
	I0722 04:25:04.723390    4749 host.go:66] Checking if "stopped-upgrade-239000" exists ...
	I0722 04:25:04.726507    4749 out.go:177] * Verifying Kubernetes components...
	I0722 04:25:04.726955    4749 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0722 04:25:04.729725    4749 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0722 04:25:04.729733    4749 sshutil.go:53] new ssh client: &{IP:localhost Port:50430 SSHKeyPath:/Users/jenkins/minikube-integration/19313-1127/.minikube/machines/stopped-upgrade-239000/id_rsa Username:docker}
	I0722 04:25:04.733494    4749 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 04:25:00.784795    4522 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:25:00.784894    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0722 04:25:00.800206    4522 logs.go:276] 1 containers: [ff0a72834be9]
	I0722 04:25:00.800275    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0722 04:25:00.811748    4522 logs.go:276] 1 containers: [a443754c5936]
	I0722 04:25:00.811814    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0722 04:25:00.823299    4522 logs.go:276] 4 containers: [11f612391bb5 3aa1fabe8d3d cc88e2e59cc9 f695590f14ba]
	I0722 04:25:00.823377    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0722 04:25:00.834640    4522 logs.go:276] 1 containers: [19fea8cb2f86]
	I0722 04:25:00.834712    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0722 04:25:00.846156    4522 logs.go:276] 1 containers: [812f238bbb81]
	I0722 04:25:00.846229    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0722 04:25:00.857562    4522 logs.go:276] 1 containers: [e86dcf4cf2ad]
	I0722 04:25:00.857636    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0722 04:25:00.868591    4522 logs.go:276] 0 containers: []
	W0722 04:25:00.868606    4522 logs.go:278] No container was found matching "kindnet"
	I0722 04:25:00.868663    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0722 04:25:00.883966    4522 logs.go:276] 1 containers: [4b4fab967404]
	I0722 04:25:00.883988    4522 logs.go:123] Gathering logs for etcd [a443754c5936] ...
	I0722 04:25:00.883993    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a443754c5936"
	I0722 04:25:00.898624    4522 logs.go:123] Gathering logs for coredns [f695590f14ba] ...
	I0722 04:25:00.898638    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f695590f14ba"
	I0722 04:25:00.911229    4522 logs.go:123] Gathering logs for Docker ...
	I0722 04:25:00.911241    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0722 04:25:00.936041    4522 logs.go:123] Gathering logs for container status ...
	I0722 04:25:00.936059    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 04:25:00.948277    4522 logs.go:123] Gathering logs for dmesg ...
	I0722 04:25:00.948290    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 04:25:00.953201    4522 logs.go:123] Gathering logs for coredns [11f612391bb5] ...
	I0722 04:25:00.953207    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11f612391bb5"
	I0722 04:25:00.964910    4522 logs.go:123] Gathering logs for coredns [3aa1fabe8d3d] ...
	I0722 04:25:00.964924    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3aa1fabe8d3d"
	I0722 04:25:00.981692    4522 logs.go:123] Gathering logs for kube-scheduler [19fea8cb2f86] ...
	I0722 04:25:00.981707    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19fea8cb2f86"
	I0722 04:25:00.998538    4522 logs.go:123] Gathering logs for kube-proxy [812f238bbb81] ...
	I0722 04:25:00.998550    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 812f238bbb81"
	I0722 04:25:01.011717    4522 logs.go:123] Gathering logs for kube-controller-manager [e86dcf4cf2ad] ...
	I0722 04:25:01.011732    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e86dcf4cf2ad"
	I0722 04:25:01.030587    4522 logs.go:123] Gathering logs for storage-provisioner [4b4fab967404] ...
	I0722 04:25:01.030605    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b4fab967404"
	I0722 04:25:01.042907    4522 logs.go:123] Gathering logs for kubelet ...
	I0722 04:25:01.042923    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0722 04:25:01.062266    4522 logs.go:138] Found kubelet problem: Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: W0722 11:17:46.135858    4280 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:25:01.062370    4522 logs.go:138] Found kubelet problem: Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: E0722 11:17:46.135900    4280 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:25:01.078419    4522 logs.go:138] Found kubelet problem: Jul 22 11:21:55 running-upgrade-724000 kubelet[11305]: W0722 11:21:55.461534   11305 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:25:01.078513    4522 logs.go:138] Found kubelet problem: Jul 22 11:21:55 running-upgrade-724000 kubelet[11305]: E0722 11:21:55.461602   11305 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	I0722 04:25:01.079729    4522 logs.go:123] Gathering logs for describe nodes ...
	I0722 04:25:01.079744    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0722 04:25:01.118289    4522 logs.go:123] Gathering logs for coredns [cc88e2e59cc9] ...
	I0722 04:25:01.118305    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc88e2e59cc9"
	I0722 04:25:01.132621    4522 logs.go:123] Gathering logs for kube-apiserver [ff0a72834be9] ...
	I0722 04:25:01.132635    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff0a72834be9"
	I0722 04:25:01.148433    4522 out.go:304] Setting ErrFile to fd 2...
	I0722 04:25:01.148448    4522 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0722 04:25:01.148476    4522 out.go:239] X Problems detected in kubelet:
	W0722 04:25:01.148481    4522 out.go:239]   Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: W0722 11:17:46.135858    4280 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:25:01.148487    4522 out.go:239]   Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: E0722 11:17:46.135900    4280 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:25:01.148491    4522 out.go:239]   Jul 22 11:21:55 running-upgrade-724000 kubelet[11305]: W0722 11:21:55.461534   11305 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:25:01.148494    4522 out.go:239]   Jul 22 11:21:55 running-upgrade-724000 kubelet[11305]: E0722 11:21:55.461602   11305 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	I0722 04:25:01.148497    4522 out.go:304] Setting ErrFile to fd 2...
	I0722 04:25:01.148499    4522 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:25:04.737518    4749 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 04:25:04.741536    4749 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0722 04:25:04.741542    4749 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0722 04:25:04.741548    4749 sshutil.go:53] new ssh client: &{IP:localhost Port:50430 SSHKeyPath:/Users/jenkins/minikube-integration/19313-1127/.minikube/machines/stopped-upgrade-239000/id_rsa Username:docker}
	I0722 04:25:04.824635    4749 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 04:25:04.830321    4749 api_server.go:52] waiting for apiserver process to appear ...
	I0722 04:25:04.830365    4749 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 04:25:04.834007    4749 api_server.go:72] duration metric: took 111.937ms to wait for apiserver process to appear ...
	I0722 04:25:04.834014    4749 api_server.go:88] waiting for apiserver healthz status ...
	I0722 04:25:04.834021    4749 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:25:04.842367    4749 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0722 04:25:04.875483    4749 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0722 04:25:09.836078    4749 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:25:09.836114    4749 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:25:11.151995    4522 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:25:14.836438    4749 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:25:14.836478    4749 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:25:16.154282    4522 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:25:16.154431    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0722 04:25:16.165082    4522 logs.go:276] 1 containers: [ff0a72834be9]
	I0722 04:25:16.165148    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0722 04:25:16.176422    4522 logs.go:276] 1 containers: [a443754c5936]
	I0722 04:25:16.176498    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0722 04:25:16.193740    4522 logs.go:276] 4 containers: [11f612391bb5 3aa1fabe8d3d cc88e2e59cc9 f695590f14ba]
	I0722 04:25:16.193806    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0722 04:25:16.204594    4522 logs.go:276] 1 containers: [19fea8cb2f86]
	I0722 04:25:16.204663    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0722 04:25:16.215102    4522 logs.go:276] 1 containers: [812f238bbb81]
	I0722 04:25:16.215174    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0722 04:25:16.225640    4522 logs.go:276] 1 containers: [e86dcf4cf2ad]
	I0722 04:25:16.225707    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0722 04:25:16.236215    4522 logs.go:276] 0 containers: []
	W0722 04:25:16.236228    4522 logs.go:278] No container was found matching "kindnet"
	I0722 04:25:16.236284    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0722 04:25:16.247077    4522 logs.go:276] 1 containers: [4b4fab967404]
	I0722 04:25:16.247098    4522 logs.go:123] Gathering logs for kubelet ...
	I0722 04:25:16.247103    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0722 04:25:16.265609    4522 logs.go:138] Found kubelet problem: Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: W0722 11:17:46.135858    4280 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:25:16.265703    4522 logs.go:138] Found kubelet problem: Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: E0722 11:17:46.135900    4280 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:25:16.281670    4522 logs.go:138] Found kubelet problem: Jul 22 11:21:55 running-upgrade-724000 kubelet[11305]: W0722 11:21:55.461534   11305 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:25:16.281762    4522 logs.go:138] Found kubelet problem: Jul 22 11:21:55 running-upgrade-724000 kubelet[11305]: E0722 11:21:55.461602   11305 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	I0722 04:25:16.282973    4522 logs.go:123] Gathering logs for coredns [3aa1fabe8d3d] ...
	I0722 04:25:16.282978    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3aa1fabe8d3d"
	I0722 04:25:16.300379    4522 logs.go:123] Gathering logs for coredns [cc88e2e59cc9] ...
	I0722 04:25:16.300389    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc88e2e59cc9"
	I0722 04:25:16.312252    4522 logs.go:123] Gathering logs for kube-scheduler [19fea8cb2f86] ...
	I0722 04:25:16.312263    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19fea8cb2f86"
	I0722 04:25:16.328008    4522 logs.go:123] Gathering logs for Docker ...
	I0722 04:25:16.328024    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0722 04:25:16.351678    4522 logs.go:123] Gathering logs for container status ...
	I0722 04:25:16.351688    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 04:25:16.363392    4522 logs.go:123] Gathering logs for dmesg ...
	I0722 04:25:16.363403    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 04:25:16.368253    4522 logs.go:123] Gathering logs for etcd [a443754c5936] ...
	I0722 04:25:16.368262    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a443754c5936"
	I0722 04:25:16.382918    4522 logs.go:123] Gathering logs for coredns [f695590f14ba] ...
	I0722 04:25:16.382930    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f695590f14ba"
	I0722 04:25:16.395151    4522 logs.go:123] Gathering logs for kube-proxy [812f238bbb81] ...
	I0722 04:25:16.395163    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 812f238bbb81"
	I0722 04:25:16.407057    4522 logs.go:123] Gathering logs for describe nodes ...
	I0722 04:25:16.407068    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0722 04:25:16.442740    4522 logs.go:123] Gathering logs for kube-apiserver [ff0a72834be9] ...
	I0722 04:25:16.442754    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff0a72834be9"
	I0722 04:25:16.457439    4522 logs.go:123] Gathering logs for storage-provisioner [4b4fab967404] ...
	I0722 04:25:16.457450    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b4fab967404"
	I0722 04:25:16.474746    4522 logs.go:123] Gathering logs for coredns [11f612391bb5] ...
	I0722 04:25:16.474757    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11f612391bb5"
	I0722 04:25:16.486637    4522 logs.go:123] Gathering logs for kube-controller-manager [e86dcf4cf2ad] ...
	I0722 04:25:16.486648    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e86dcf4cf2ad"
	I0722 04:25:16.504282    4522 out.go:304] Setting ErrFile to fd 2...
	I0722 04:25:16.504292    4522 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0722 04:25:16.504319    4522 out.go:239] X Problems detected in kubelet:
	W0722 04:25:16.504323    4522 out.go:239]   Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: W0722 11:17:46.135858    4280 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:25:16.504327    4522 out.go:239]   Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: E0722 11:17:46.135900    4280 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:25:16.504332    4522 out.go:239]   Jul 22 11:21:55 running-upgrade-724000 kubelet[11305]: W0722 11:21:55.461534   11305 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:25:16.504335    4522 out.go:239]   Jul 22 11:21:55 running-upgrade-724000 kubelet[11305]: E0722 11:21:55.461602   11305 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	I0722 04:25:16.504369    4522 out.go:304] Setting ErrFile to fd 2...
	I0722 04:25:16.504389    4522 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:25:19.836749    4749 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:25:19.836780    4749 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:25:24.837237    4749 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:25:24.837277    4749 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:25:26.508375    4522 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:25:29.837837    4749 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:25:29.837872    4749 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:25:34.838642    4749 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:25:34.838679    4749 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0722 04:25:35.196096    4749 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0722 04:25:35.200393    4749 out.go:177] * Enabled addons: storage-provisioner
	I0722 04:25:31.510636    4522 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:25:31.510844    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0722 04:25:31.530741    4522 logs.go:276] 1 containers: [ff0a72834be9]
	I0722 04:25:31.530833    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0722 04:25:31.545082    4522 logs.go:276] 1 containers: [a443754c5936]
	I0722 04:25:31.545157    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0722 04:25:31.557613    4522 logs.go:276] 4 containers: [11f612391bb5 3aa1fabe8d3d cc88e2e59cc9 f695590f14ba]
	I0722 04:25:31.557684    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0722 04:25:31.567748    4522 logs.go:276] 1 containers: [19fea8cb2f86]
	I0722 04:25:31.567809    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0722 04:25:31.578737    4522 logs.go:276] 1 containers: [812f238bbb81]
	I0722 04:25:31.578794    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0722 04:25:31.589296    4522 logs.go:276] 1 containers: [e86dcf4cf2ad]
	I0722 04:25:31.589361    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0722 04:25:31.599453    4522 logs.go:276] 0 containers: []
	W0722 04:25:31.599465    4522 logs.go:278] No container was found matching "kindnet"
	I0722 04:25:31.599516    4522 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0722 04:25:31.614106    4522 logs.go:276] 1 containers: [4b4fab967404]
	I0722 04:25:31.614124    4522 logs.go:123] Gathering logs for dmesg ...
	I0722 04:25:31.614130    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 04:25:31.618637    4522 logs.go:123] Gathering logs for etcd [a443754c5936] ...
	I0722 04:25:31.618645    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a443754c5936"
	I0722 04:25:31.632366    4522 logs.go:123] Gathering logs for storage-provisioner [4b4fab967404] ...
	I0722 04:25:31.632375    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4b4fab967404"
	I0722 04:25:31.644714    4522 logs.go:123] Gathering logs for coredns [11f612391bb5] ...
	I0722 04:25:31.644725    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11f612391bb5"
	I0722 04:25:31.656940    4522 logs.go:123] Gathering logs for container status ...
	I0722 04:25:31.656955    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 04:25:31.668468    4522 logs.go:123] Gathering logs for kubelet ...
	I0722 04:25:31.668479    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0722 04:25:31.686264    4522 logs.go:138] Found kubelet problem: Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: W0722 11:17:46.135858    4280 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:25:31.686361    4522 logs.go:138] Found kubelet problem: Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: E0722 11:17:46.135900    4280 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:25:31.702588    4522 logs.go:138] Found kubelet problem: Jul 22 11:21:55 running-upgrade-724000 kubelet[11305]: W0722 11:21:55.461534   11305 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:25:31.702691    4522 logs.go:138] Found kubelet problem: Jul 22 11:21:55 running-upgrade-724000 kubelet[11305]: E0722 11:21:55.461602   11305 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	I0722 04:25:31.703908    4522 logs.go:123] Gathering logs for coredns [3aa1fabe8d3d] ...
	I0722 04:25:31.703918    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3aa1fabe8d3d"
	I0722 04:25:31.729237    4522 logs.go:123] Gathering logs for kube-proxy [812f238bbb81] ...
	I0722 04:25:31.729252    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 812f238bbb81"
	I0722 04:25:31.742972    4522 logs.go:123] Gathering logs for kube-controller-manager [e86dcf4cf2ad] ...
	I0722 04:25:31.742982    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e86dcf4cf2ad"
	I0722 04:25:31.760139    4522 logs.go:123] Gathering logs for Docker ...
	I0722 04:25:31.760149    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0722 04:25:31.783510    4522 logs.go:123] Gathering logs for describe nodes ...
	I0722 04:25:31.783518    4522 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0722 04:25:31.823762    4522 logs.go:123] Gathering logs for kube-apiserver [ff0a72834be9] ...
	I0722 04:25:31.823772    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ff0a72834be9"
	I0722 04:25:31.838675    4522 logs.go:123] Gathering logs for coredns [cc88e2e59cc9] ...
	I0722 04:25:31.838685    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cc88e2e59cc9"
	I0722 04:25:31.850752    4522 logs.go:123] Gathering logs for coredns [f695590f14ba] ...
	I0722 04:25:31.850762    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f695590f14ba"
	I0722 04:25:31.862346    4522 logs.go:123] Gathering logs for kube-scheduler [19fea8cb2f86] ...
	I0722 04:25:31.862360    4522 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19fea8cb2f86"
	I0722 04:25:31.877817    4522 out.go:304] Setting ErrFile to fd 2...
	I0722 04:25:31.877828    4522 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0722 04:25:31.877855    4522 out.go:239] X Problems detected in kubelet:
	W0722 04:25:31.877859    4522 out.go:239]   Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: W0722 11:17:46.135858    4280 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:25:31.877862    4522 out.go:239]   Jul 22 11:17:46 running-upgrade-724000 kubelet[4280]: E0722 11:17:46.135900    4280 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:25:31.877867    4522 out.go:239]   Jul 22 11:21:55 running-upgrade-724000 kubelet[11305]: W0722 11:21:55.461534   11305 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	W0722 04:25:31.877870    4522 out.go:239]   Jul 22 11:21:55 running-upgrade-724000 kubelet[11305]: E0722 11:21:55.461602   11305 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	I0722 04:25:31.877873    4522 out.go:304] Setting ErrFile to fd 2...
	I0722 04:25:31.877875    4522 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:25:35.208336    4749 addons.go:510] duration metric: took 30.486757708s for enable addons: enabled=[storage-provisioner]
	I0722 04:25:39.839717    4749 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:25:39.839770    4749 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:25:41.881847    4522 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:25:46.884140    4522 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:25:46.887587    4522 out.go:177] 
	W0722 04:25:46.891622    4522 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0722 04:25:46.891632    4522 out.go:239] * 
	W0722 04:25:46.892323    4522 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0722 04:25:46.903465    4522 out.go:177] 
	I0722 04:25:44.841200    4749 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:25:44.841244    4749 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:25:49.842075    4749 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:25:49.842136    4749 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:25:54.844237    4749 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:25:54.844291    4749 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:25:59.846423    4749 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:25:59.846472    4749 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	
	
	==> Docker <==
	-- Journal begins at Mon 2024-07-22 11:16:38 UTC, ends at Mon 2024-07-22 11:26:02 UTC. --
	Jul 22 11:25:43 running-upgrade-724000 dockerd[3226]: time="2024-07-22T11:25:43.714895432Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/913c1ba3d0cfaabba18c2959cde5a4e468eb2b304ffb3553b077998503261da8 pid=16142 runtime=io.containerd.runc.v2
	Jul 22 11:25:43 running-upgrade-724000 cri-dockerd[3070]: time="2024-07-22T11:25:43Z" level=error msg="ContainerStats resp: {0x4000852c40 linux}"
	Jul 22 11:25:43 running-upgrade-724000 cri-dockerd[3070]: time="2024-07-22T11:25:43Z" level=error msg="ContainerStats resp: {0x40008fd400 linux}"
	Jul 22 11:25:44 running-upgrade-724000 cri-dockerd[3070]: time="2024-07-22T11:25:44Z" level=error msg="ContainerStats resp: {0x4000863f00 linux}"
	Jul 22 11:25:45 running-upgrade-724000 cri-dockerd[3070]: time="2024-07-22T11:25:45Z" level=error msg="ContainerStats resp: {0x40004adb00 linux}"
	Jul 22 11:25:45 running-upgrade-724000 cri-dockerd[3070]: time="2024-07-22T11:25:45Z" level=error msg="ContainerStats resp: {0x40004adf40 linux}"
	Jul 22 11:25:45 running-upgrade-724000 cri-dockerd[3070]: time="2024-07-22T11:25:45Z" level=error msg="ContainerStats resp: {0x40005c1480 linux}"
	Jul 22 11:25:45 running-upgrade-724000 cri-dockerd[3070]: time="2024-07-22T11:25:45Z" level=error msg="ContainerStats resp: {0x40005c1840 linux}"
	Jul 22 11:25:45 running-upgrade-724000 cri-dockerd[3070]: time="2024-07-22T11:25:45Z" level=error msg="ContainerStats resp: {0x400090ce40 linux}"
	Jul 22 11:25:45 running-upgrade-724000 cri-dockerd[3070]: time="2024-07-22T11:25:45Z" level=error msg="ContainerStats resp: {0x400090d440 linux}"
	Jul 22 11:25:45 running-upgrade-724000 cri-dockerd[3070]: time="2024-07-22T11:25:45Z" level=error msg="ContainerStats resp: {0x4000826400 linux}"
	Jul 22 11:25:46 running-upgrade-724000 cri-dockerd[3070]: time="2024-07-22T11:25:46Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Jul 22 11:25:51 running-upgrade-724000 cri-dockerd[3070]: time="2024-07-22T11:25:51Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Jul 22 11:25:55 running-upgrade-724000 cri-dockerd[3070]: time="2024-07-22T11:25:55Z" level=error msg="ContainerStats resp: {0x400090dfc0 linux}"
	Jul 22 11:25:55 running-upgrade-724000 cri-dockerd[3070]: time="2024-07-22T11:25:55Z" level=error msg="ContainerStats resp: {0x40005c1000 linux}"
	Jul 22 11:25:56 running-upgrade-724000 cri-dockerd[3070]: time="2024-07-22T11:25:56Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Jul 22 11:25:56 running-upgrade-724000 cri-dockerd[3070]: time="2024-07-22T11:25:56Z" level=error msg="ContainerStats resp: {0x40004ad940 linux}"
	Jul 22 11:25:57 running-upgrade-724000 cri-dockerd[3070]: time="2024-07-22T11:25:57Z" level=error msg="ContainerStats resp: {0x4000530540 linux}"
	Jul 22 11:25:57 running-upgrade-724000 cri-dockerd[3070]: time="2024-07-22T11:25:57Z" level=error msg="ContainerStats resp: {0x40005c1fc0 linux}"
	Jul 22 11:25:57 running-upgrade-724000 cri-dockerd[3070]: time="2024-07-22T11:25:57Z" level=error msg="ContainerStats resp: {0x4000862540 linux}"
	Jul 22 11:25:57 running-upgrade-724000 cri-dockerd[3070]: time="2024-07-22T11:25:57Z" level=error msg="ContainerStats resp: {0x4000862680 linux}"
	Jul 22 11:25:57 running-upgrade-724000 cri-dockerd[3070]: time="2024-07-22T11:25:57Z" level=error msg="ContainerStats resp: {0x40005319c0 linux}"
	Jul 22 11:25:57 running-upgrade-724000 cri-dockerd[3070]: time="2024-07-22T11:25:57Z" level=error msg="ContainerStats resp: {0x4000862c80 linux}"
	Jul 22 11:25:57 running-upgrade-724000 cri-dockerd[3070]: time="2024-07-22T11:25:57Z" level=error msg="ContainerStats resp: {0x4000826840 linux}"
	Jul 22 11:26:01 running-upgrade-724000 cri-dockerd[3070]: time="2024-07-22T11:26:01Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	414a97f84c02d       edaa71f2aee88       19 seconds ago      Running             coredns                   2                   41ac3313969f2
	913c1ba3d0cfa       edaa71f2aee88       19 seconds ago      Running             coredns                   2                   9640c77da33d6
	11f612391bb55       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   41ac3313969f2
	3aa1fabe8d3d8       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   9640c77da33d6
	812f238bbb819       fcbd620bbac08       4 minutes ago       Running             kube-proxy                0                   357f36b2b7486
	4b4fab9674048       66749159455b3       4 minutes ago       Running             storage-provisioner       0                   2cb3d07549db2
	19fea8cb2f864       000c19baf6bba       4 minutes ago       Running             kube-scheduler            0                   8ffebb3d466f5
	e86dcf4cf2ade       f61bbe9259d7c       4 minutes ago       Running             kube-controller-manager   0                   1c7fadfd10848
	a443754c59365       a9a710bb96df0       4 minutes ago       Running             etcd                      0                   e6895aecac3c9
	ff0a72834be9e       7c5896a75862a       4 minutes ago       Running             kube-apiserver            0                   ba7140b1ecf4a
	
	
	==> coredns [11f612391bb5] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 5363651591966000034.5731852947976359494. HINFO: read udp 10.244.0.2:51999->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5363651591966000034.5731852947976359494. HINFO: read udp 10.244.0.2:41592->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5363651591966000034.5731852947976359494. HINFO: read udp 10.244.0.2:53437->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5363651591966000034.5731852947976359494. HINFO: read udp 10.244.0.2:56093->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5363651591966000034.5731852947976359494. HINFO: read udp 10.244.0.2:40990->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5363651591966000034.5731852947976359494. HINFO: read udp 10.244.0.2:49324->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5363651591966000034.5731852947976359494. HINFO: read udp 10.244.0.2:52966->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5363651591966000034.5731852947976359494. HINFO: read udp 10.244.0.2:56998->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5363651591966000034.5731852947976359494. HINFO: read udp 10.244.0.2:54437->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5363651591966000034.5731852947976359494. HINFO: read udp 10.244.0.2:39912->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [3aa1fabe8d3d] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 7149194600647420036.3483167969393561170. HINFO: read udp 10.244.0.3:35657->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7149194600647420036.3483167969393561170. HINFO: read udp 10.244.0.3:49076->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7149194600647420036.3483167969393561170. HINFO: read udp 10.244.0.3:36915->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7149194600647420036.3483167969393561170. HINFO: read udp 10.244.0.3:41054->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7149194600647420036.3483167969393561170. HINFO: read udp 10.244.0.3:43978->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7149194600647420036.3483167969393561170. HINFO: read udp 10.244.0.3:41546->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7149194600647420036.3483167969393561170. HINFO: read udp 10.244.0.3:56546->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7149194600647420036.3483167969393561170. HINFO: read udp 10.244.0.3:33960->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7149194600647420036.3483167969393561170. HINFO: read udp 10.244.0.3:38055->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7149194600647420036.3483167969393561170. HINFO: read udp 10.244.0.3:46444->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [414a97f84c02] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 9042623279896348071.760707399089710169. HINFO: read udp 10.244.0.2:42880->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 9042623279896348071.760707399089710169. HINFO: read udp 10.244.0.2:58310->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 9042623279896348071.760707399089710169. HINFO: read udp 10.244.0.2:44774->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 9042623279896348071.760707399089710169. HINFO: read udp 10.244.0.2:44419->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 9042623279896348071.760707399089710169. HINFO: read udp 10.244.0.2:47713->10.0.2.3:53: i/o timeout
	
	
	==> coredns [913c1ba3d0cf] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 7789787701681211760.6236463894831356277. HINFO: read udp 10.244.0.3:59864->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7789787701681211760.6236463894831356277. HINFO: read udp 10.244.0.3:59346->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7789787701681211760.6236463894831356277. HINFO: read udp 10.244.0.3:53495->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7789787701681211760.6236463894831356277. HINFO: read udp 10.244.0.3:34094->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7789787701681211760.6236463894831356277. HINFO: read udp 10.244.0.3:40234->10.0.2.3:53: i/o timeout
	
	
	==> describe nodes <==
	Name:               running-upgrade-724000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=running-upgrade-724000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8e5b1d22910d5d447b525af478862a848159d7b7
	                    minikube.k8s.io/name=running-upgrade-724000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_22T04_21_42_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 22 Jul 2024 11:21:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  running-upgrade-724000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 22 Jul 2024 11:25:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 22 Jul 2024 11:21:42 +0000   Mon, 22 Jul 2024 11:21:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 22 Jul 2024 11:21:42 +0000   Mon, 22 Jul 2024 11:21:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 22 Jul 2024 11:21:42 +0000   Mon, 22 Jul 2024 11:21:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 22 Jul 2024 11:21:42 +0000   Mon, 22 Jul 2024 11:21:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.0.2.15
	  Hostname:    running-upgrade-724000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	System Info:
	  Machine ID:                 3d929b33098b4ed0be92894af2c7d088
	  System UUID:                3d929b33098b4ed0be92894af2c7d088
	  Boot ID:                    ed393869-5304-4394-8146-d82925650491
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.24.1
	  Kube-Proxy Version:         v1.24.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-7zfbb                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m8s
	  kube-system                 coredns-6d4b75cb6d-vpxsc                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m8s
	  kube-system                 etcd-running-upgrade-724000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m22s
	  kube-system                 kube-apiserver-running-upgrade-724000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m21s
	  kube-system                 kube-controller-manager-running-upgrade-724000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m22s
	  kube-system                 kube-proxy-t6l9d                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m8s
	  kube-system                 kube-scheduler-running-upgrade-724000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m21s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m20s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m5s   kube-proxy       
	  Normal  NodeReady                4m21s  kubelet          Node running-upgrade-724000 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  4m21s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m21s  kubelet          Node running-upgrade-724000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m21s  kubelet          Node running-upgrade-724000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m21s  kubelet          Node running-upgrade-724000 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m21s  kubelet          Starting kubelet.
	  Normal  RegisteredNode           4m9s   node-controller  Node running-upgrade-724000 event: Registered Node running-upgrade-724000 in Controller
	
	
	==> dmesg <==
	[  +1.640371] systemd-fstab-generator[885]: Ignoring "noauto" for root device
	[  +0.073368] systemd-fstab-generator[896]: Ignoring "noauto" for root device
	[  +0.088565] systemd-fstab-generator[907]: Ignoring "noauto" for root device
	[  +1.139689] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.082856] systemd-fstab-generator[1058]: Ignoring "noauto" for root device
	[  +0.081333] systemd-fstab-generator[1069]: Ignoring "noauto" for root device
	[  +2.206317] systemd-fstab-generator[1297]: Ignoring "noauto" for root device
	[Jul22 11:17] systemd-fstab-generator[1847]: Ignoring "noauto" for root device
	[  +2.791948] systemd-fstab-generator[2210]: Ignoring "noauto" for root device
	[  +0.156432] systemd-fstab-generator[2245]: Ignoring "noauto" for root device
	[  +0.091037] systemd-fstab-generator[2256]: Ignoring "noauto" for root device
	[  +0.094746] systemd-fstab-generator[2269]: Ignoring "noauto" for root device
	[ +13.300676] kauditd_printk_skb: 47 callbacks suppressed
	[  +0.219929] systemd-fstab-generator[3025]: Ignoring "noauto" for root device
	[  +0.079998] systemd-fstab-generator[3038]: Ignoring "noauto" for root device
	[  +0.077854] systemd-fstab-generator[3049]: Ignoring "noauto" for root device
	[  +0.099320] systemd-fstab-generator[3063]: Ignoring "noauto" for root device
	[  +2.299185] systemd-fstab-generator[3213]: Ignoring "noauto" for root device
	[  +3.570669] systemd-fstab-generator[3861]: Ignoring "noauto" for root device
	[  +1.896071] systemd-fstab-generator[4274]: Ignoring "noauto" for root device
	[ +18.020009] kauditd_printk_skb: 68 callbacks suppressed
	[Jul22 11:18] kauditd_printk_skb: 19 callbacks suppressed
	[Jul22 11:21] systemd-fstab-generator[10679]: Ignoring "noauto" for root device
	[  +6.136982] systemd-fstab-generator[11299]: Ignoring "noauto" for root device
	[  +0.471937] systemd-fstab-generator[11430]: Ignoring "noauto" for root device
	
	
	==> etcd [a443754c5936] <==
	{"level":"info","ts":"2024-07-22T11:21:37.420Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 switched to configuration voters=(17326651331455243045)"}
	{"level":"info","ts":"2024-07-22T11:21:37.420Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","added-peer-id":"f074a195de705325","added-peer-peer-urls":["https://10.0.2.15:2380"]}
	{"level":"info","ts":"2024-07-22T11:21:37.420Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-22T11:21:37.421Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f074a195de705325","initial-advertise-peer-urls":["https://10.0.2.15:2380"],"listen-peer-urls":["https://10.0.2.15:2380"],"advertise-client-urls":["https://10.0.2.15:2379"],"listen-client-urls":["https://10.0.2.15:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-22T11:21:37.421Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-22T11:21:37.421Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-07-22T11:21:37.421Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-07-22T11:21:38.398Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 is starting a new election at term 1"}
	{"level":"info","ts":"2024-07-22T11:21:38.398Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-07-22T11:21:38.398Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgPreVoteResp from f074a195de705325 at term 1"}
	{"level":"info","ts":"2024-07-22T11:21:38.398Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became candidate at term 2"}
	{"level":"info","ts":"2024-07-22T11:21:38.398Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgVoteResp from f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-07-22T11:21:38.398Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became leader at term 2"}
	{"level":"info","ts":"2024-07-22T11:21:38.398Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f074a195de705325 elected leader f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-07-22T11:21:38.398Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-22T11:21:38.399Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-22T11:21:38.399Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-22T11:21:38.399Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-22T11:21:38.399Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"f074a195de705325","local-member-attributes":"{Name:running-upgrade-724000 ClientURLs:[https://10.0.2.15:2379]}","request-path":"/0/members/f074a195de705325/attributes","cluster-id":"ef296cf39f5d9d66","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-22T11:21:38.399Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-22T11:21:38.399Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-22T11:21:38.400Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-22T11:21:38.400Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"10.0.2.15:2379"}
	{"level":"info","ts":"2024-07-22T11:21:38.400Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-22T11:21:38.400Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 11:26:03 up 9 min,  0 users,  load average: 0.07, 0.19, 0.11
	Linux running-upgrade-724000 5.10.57 #1 SMP PREEMPT Thu Jun 16 21:01:29 UTC 2022 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [ff0a72834be9] <==
	I0722 11:21:39.603358       1 controller.go:611] quota admission added evaluator for: namespaces
	I0722 11:21:39.655606       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0722 11:21:39.658905       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0722 11:21:39.658945       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0722 11:21:39.658950       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0722 11:21:39.668037       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0722 11:21:39.668052       1 cache.go:39] Caches are synced for autoregister controller
	I0722 11:21:40.389763       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0722 11:21:40.563234       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0722 11:21:40.565798       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0722 11:21:40.565878       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0722 11:21:40.689555       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0722 11:21:40.700654       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0722 11:21:40.808182       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0722 11:21:40.810304       1 lease.go:234] Resetting endpoints for master service "kubernetes" to [10.0.2.15]
	I0722 11:21:40.810698       1 controller.go:611] quota admission added evaluator for: endpoints
	I0722 11:21:40.811975       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0722 11:21:41.700163       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0722 11:21:42.255222       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0722 11:21:42.258195       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0722 11:21:42.273444       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0722 11:21:42.317862       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0722 11:21:55.303215       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0722 11:21:55.452685       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0722 11:21:57.476117       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	
	==> kube-controller-manager [e86dcf4cf2ad] <==
	I0722 11:21:54.547634       1 shared_informer.go:262] Caches are synced for stateful set
	I0722 11:21:54.547652       1 shared_informer.go:262] Caches are synced for PV protection
	I0722 11:21:54.547700       1 shared_informer.go:262] Caches are synced for ephemeral
	I0722 11:21:54.547615       1 shared_informer.go:262] Caches are synced for cronjob
	I0722 11:21:54.548247       1 shared_informer.go:262] Caches are synced for PVC protection
	I0722 11:21:54.549926       1 shared_informer.go:262] Caches are synced for GC
	I0722 11:21:54.555897       1 shared_informer.go:262] Caches are synced for namespace
	I0722 11:21:54.597268       1 shared_informer.go:262] Caches are synced for daemon sets
	I0722 11:21:54.597276       1 shared_informer.go:262] Caches are synced for taint
	I0722 11:21:54.597352       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	W0722 11:21:54.597402       1 node_lifecycle_controller.go:1014] Missing timestamp for Node running-upgrade-724000. Assuming now as a timestamp.
	I0722 11:21:54.597453       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0722 11:21:54.597618       1 event.go:294] "Event occurred" object="running-upgrade-724000" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node running-upgrade-724000 event: Registered Node running-upgrade-724000 in Controller"
	I0722 11:21:54.597662       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0722 11:21:54.621973       1 shared_informer.go:262] Caches are synced for persistent volume
	I0722 11:21:54.698345       1 shared_informer.go:262] Caches are synced for attach detach
	I0722 11:21:54.737136       1 shared_informer.go:262] Caches are synced for resource quota
	I0722 11:21:54.753625       1 shared_informer.go:262] Caches are synced for resource quota
	I0722 11:21:55.166966       1 shared_informer.go:262] Caches are synced for garbage collector
	I0722 11:21:55.197318       1 shared_informer.go:262] Caches are synced for garbage collector
	I0722 11:21:55.197340       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0722 11:21:55.304789       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-6d4b75cb6d to 2"
	I0722 11:21:55.456467       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-t6l9d"
	I0722 11:21:55.555024       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-vpxsc"
	I0722 11:21:55.560488       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-7zfbb"
	
	
	==> kube-proxy [812f238bbb81] <==
	I0722 11:21:57.464300       1 node.go:163] Successfully retrieved node IP: 10.0.2.15
	I0722 11:21:57.464328       1 server_others.go:138] "Detected node IP" address="10.0.2.15"
	I0722 11:21:57.464339       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0722 11:21:57.474241       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0722 11:21:57.474251       1 server_others.go:206] "Using iptables Proxier"
	I0722 11:21:57.474275       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0722 11:21:57.474394       1 server.go:661] "Version info" version="v1.24.1"
	I0722 11:21:57.474442       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0722 11:21:57.474761       1 config.go:317] "Starting service config controller"
	I0722 11:21:57.474770       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0722 11:21:57.474808       1 config.go:226] "Starting endpoint slice config controller"
	I0722 11:21:57.474815       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0722 11:21:57.475090       1 config.go:444] "Starting node config controller"
	I0722 11:21:57.475114       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0722 11:21:57.576199       1 shared_informer.go:262] Caches are synced for node config
	I0722 11:21:57.576221       1 shared_informer.go:262] Caches are synced for service config
	I0722 11:21:57.576230       1 shared_informer.go:262] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [19fea8cb2f86] <==
	W0722 11:21:39.600884       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0722 11:21:39.600895       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0722 11:21:39.600910       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0722 11:21:39.600954       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0722 11:21:39.600990       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0722 11:21:39.601020       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0722 11:21:39.601073       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0722 11:21:39.601023       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0722 11:21:40.440290       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0722 11:21:40.440350       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0722 11:21:40.451879       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0722 11:21:40.451888       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0722 11:21:40.486991       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0722 11:21:40.487026       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0722 11:21:40.487064       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0722 11:21:40.487083       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0722 11:21:40.493395       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0722 11:21:40.493414       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0722 11:21:40.508304       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0722 11:21:40.508321       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0722 11:21:40.607485       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0722 11:21:40.607522       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0722 11:21:40.638285       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0722 11:21:40.638304       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0722 11:21:42.596214       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Mon 2024-07-22 11:16:38 UTC, ends at Mon 2024-07-22 11:26:03 UTC. --
	Jul 22 11:21:54 running-upgrade-724000 kubelet[11305]: I0722 11:21:54.600371   11305 kuberuntime_manager.go:1095] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jul 22 11:21:54 running-upgrade-724000 kubelet[11305]: I0722 11:21:54.600924   11305 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jul 22 11:21:54 running-upgrade-724000 kubelet[11305]: I0722 11:21:54.607057   11305 topology_manager.go:200] "Topology Admit Handler"
	Jul 22 11:21:54 running-upgrade-724000 kubelet[11305]: I0722 11:21:54.801390   11305 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nd24t\" (UniqueName: \"kubernetes.io/projected/e6e90ba8-a9fe-4d13-a9cd-ad293d4490b9-kube-api-access-nd24t\") pod \"storage-provisioner\" (UID: \"e6e90ba8-a9fe-4d13-a9cd-ad293d4490b9\") " pod="kube-system/storage-provisioner"
	Jul 22 11:21:54 running-upgrade-724000 kubelet[11305]: I0722 11:21:54.801417   11305 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/e6e90ba8-a9fe-4d13-a9cd-ad293d4490b9-tmp\") pod \"storage-provisioner\" (UID: \"e6e90ba8-a9fe-4d13-a9cd-ad293d4490b9\") " pod="kube-system/storage-provisioner"
	Jul 22 11:21:54 running-upgrade-724000 kubelet[11305]: E0722 11:21:54.905777   11305 projected.go:286] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Jul 22 11:21:54 running-upgrade-724000 kubelet[11305]: E0722 11:21:54.905819   11305 projected.go:192] Error preparing data for projected volume kube-api-access-nd24t for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
	Jul 22 11:21:54 running-upgrade-724000 kubelet[11305]: E0722 11:21:54.905857   11305 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/e6e90ba8-a9fe-4d13-a9cd-ad293d4490b9-kube-api-access-nd24t podName:e6e90ba8-a9fe-4d13-a9cd-ad293d4490b9 nodeName:}" failed. No retries permitted until 2024-07-22 11:21:55.405844211 +0000 UTC m=+13.159914772 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-nd24t" (UniqueName: "kubernetes.io/projected/e6e90ba8-a9fe-4d13-a9cd-ad293d4490b9-kube-api-access-nd24t") pod "storage-provisioner" (UID: "e6e90ba8-a9fe-4d13-a9cd-ad293d4490b9") : configmap "kube-root-ca.crt" not found
	Jul 22 11:21:55 running-upgrade-724000 kubelet[11305]: I0722 11:21:55.459507   11305 topology_manager.go:200] "Topology Admit Handler"
	Jul 22 11:21:55 running-upgrade-724000 kubelet[11305]: W0722 11:21:55.461534   11305 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	Jul 22 11:21:55 running-upgrade-724000 kubelet[11305]: E0722 11:21:55.461602   11305 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-724000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-724000' and this object
	Jul 22 11:21:55 running-upgrade-724000 kubelet[11305]: I0722 11:21:55.506056   11305 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4e19d028-0b27-475c-af63-c8132ba327b9-kube-proxy\") pod \"kube-proxy-t6l9d\" (UID: \"4e19d028-0b27-475c-af63-c8132ba327b9\") " pod="kube-system/kube-proxy-t6l9d"
	Jul 22 11:21:55 running-upgrade-724000 kubelet[11305]: I0722 11:21:55.556630   11305 topology_manager.go:200] "Topology Admit Handler"
	Jul 22 11:21:55 running-upgrade-724000 kubelet[11305]: I0722 11:21:55.569759   11305 topology_manager.go:200] "Topology Admit Handler"
	Jul 22 11:21:55 running-upgrade-724000 kubelet[11305]: I0722 11:21:55.607178   11305 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4e19d028-0b27-475c-af63-c8132ba327b9-lib-modules\") pod \"kube-proxy-t6l9d\" (UID: \"4e19d028-0b27-475c-af63-c8132ba327b9\") " pod="kube-system/kube-proxy-t6l9d"
	Jul 22 11:21:55 running-upgrade-724000 kubelet[11305]: I0722 11:21:55.607257   11305 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4e19d028-0b27-475c-af63-c8132ba327b9-xtables-lock\") pod \"kube-proxy-t6l9d\" (UID: \"4e19d028-0b27-475c-af63-c8132ba327b9\") " pod="kube-system/kube-proxy-t6l9d"
	Jul 22 11:21:55 running-upgrade-724000 kubelet[11305]: I0722 11:21:55.607272   11305 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9p74\" (UniqueName: \"kubernetes.io/projected/4e19d028-0b27-475c-af63-c8132ba327b9-kube-api-access-f9p74\") pod \"kube-proxy-t6l9d\" (UID: \"4e19d028-0b27-475c-af63-c8132ba327b9\") " pod="kube-system/kube-proxy-t6l9d"
	Jul 22 11:21:55 running-upgrade-724000 kubelet[11305]: I0722 11:21:55.707703   11305 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zcmk2\" (UniqueName: \"kubernetes.io/projected/a850ee74-42b1-4bda-b30e-0affff039cac-kube-api-access-zcmk2\") pod \"coredns-6d4b75cb6d-7zfbb\" (UID: \"a850ee74-42b1-4bda-b30e-0affff039cac\") " pod="kube-system/coredns-6d4b75cb6d-7zfbb"
	Jul 22 11:21:55 running-upgrade-724000 kubelet[11305]: I0722 11:21:55.707746   11305 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/11744824-8508-4fbf-88a0-96ab08948aac-config-volume\") pod \"coredns-6d4b75cb6d-vpxsc\" (UID: \"11744824-8508-4fbf-88a0-96ab08948aac\") " pod="kube-system/coredns-6d4b75cb6d-vpxsc"
	Jul 22 11:21:55 running-upgrade-724000 kubelet[11305]: I0722 11:21:55.707757   11305 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zc5ns\" (UniqueName: \"kubernetes.io/projected/11744824-8508-4fbf-88a0-96ab08948aac-kube-api-access-zc5ns\") pod \"coredns-6d4b75cb6d-vpxsc\" (UID: \"11744824-8508-4fbf-88a0-96ab08948aac\") " pod="kube-system/coredns-6d4b75cb6d-vpxsc"
	Jul 22 11:21:55 running-upgrade-724000 kubelet[11305]: I0722 11:21:55.707767   11305 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a850ee74-42b1-4bda-b30e-0affff039cac-config-volume\") pod \"coredns-6d4b75cb6d-7zfbb\" (UID: \"a850ee74-42b1-4bda-b30e-0affff039cac\") " pod="kube-system/coredns-6d4b75cb6d-7zfbb"
	Jul 22 11:21:56 running-upgrade-724000 kubelet[11305]: E0722 11:21:56.607982   11305 configmap.go:193] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition
	Jul 22 11:21:56 running-upgrade-724000 kubelet[11305]: E0722 11:21:56.608046   11305 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/4e19d028-0b27-475c-af63-c8132ba327b9-kube-proxy podName:4e19d028-0b27-475c-af63-c8132ba327b9 nodeName:}" failed. No retries permitted until 2024-07-22 11:21:57.108022106 +0000 UTC m=+14.862092667 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/4e19d028-0b27-475c-af63-c8132ba327b9-kube-proxy") pod "kube-proxy-t6l9d" (UID: "4e19d028-0b27-475c-af63-c8132ba327b9") : failed to sync configmap cache: timed out waiting for the condition
	Jul 22 11:25:43 running-upgrade-724000 kubelet[11305]: I0722 11:25:43.767842   11305 scope.go:110] "RemoveContainer" containerID="cc88e2e59cc95addd64c622914b3301eb98a65287728077988746101756a1196"
	Jul 22 11:25:43 running-upgrade-724000 kubelet[11305]: I0722 11:25:43.779574   11305 scope.go:110] "RemoveContainer" containerID="f695590f14ba44beaa7d6a0d62aa243ea224ef2ea52ec3d8ae79f3376700b756"
	
	
	==> storage-provisioner [4b4fab967404] <==
	I0722 11:21:55.699664       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0722 11:21:55.703689       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0722 11:21:55.703756       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0722 11:21:55.706637       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0722 11:21:55.706777       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_running-upgrade-724000_fd1a34e1-e99a-4b9d-9940-a59a844187a5!
	I0722 11:21:55.706991       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"89542ae9-6619-4aaa-bbf3-e0fadb8d1178", APIVersion:"v1", ResourceVersion:"359", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' running-upgrade-724000_fd1a34e1-e99a-4b9d-9940-a59a844187a5 became leader
	I0722 11:21:55.807963       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_running-upgrade-724000_fd1a34e1-e99a-4b9d-9940-a59a844187a5!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-724000 -n running-upgrade-724000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-724000 -n running-upgrade-724000: exit status 2 (15.687742416s)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "running-upgrade-724000" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "running-upgrade-724000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-724000
--- FAIL: TestRunningBinaryUpgrade (613.24s)

TestKubernetesUpgrade (18.53s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-682000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-682000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (9.71481325s)

-- stdout --
	* [kubernetes-upgrade-682000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19313
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19313-1127/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19313-1127/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubernetes-upgrade-682000" primary control-plane node in "kubernetes-upgrade-682000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-682000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0722 04:19:06.127799    4643 out.go:291] Setting OutFile to fd 1 ...
	I0722 04:19:06.127925    4643 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:19:06.127928    4643 out.go:304] Setting ErrFile to fd 2...
	I0722 04:19:06.127930    4643 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:19:06.128061    4643 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19313-1127/.minikube/bin
	I0722 04:19:06.129174    4643 out.go:298] Setting JSON to false
	I0722 04:19:06.145198    4643 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4715,"bootTime":1721642431,"procs":459,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0722 04:19:06.145266    4643 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0722 04:19:06.150602    4643 out.go:177] * [kubernetes-upgrade-682000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0722 04:19:06.158454    4643 out.go:177]   - MINIKUBE_LOCATION=19313
	I0722 04:19:06.158488    4643 notify.go:220] Checking for updates...
	I0722 04:19:06.165665    4643 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19313-1127/kubeconfig
	I0722 04:19:06.166962    4643 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0722 04:19:06.170541    4643 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0722 04:19:06.173573    4643 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19313-1127/.minikube
	I0722 04:19:06.174858    4643 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0722 04:19:06.177876    4643 config.go:182] Loaded profile config "multinode-941000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0722 04:19:06.177949    4643 config.go:182] Loaded profile config "running-upgrade-724000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0722 04:19:06.178000    4643 driver.go:392] Setting default libvirt URI to qemu:///system
	I0722 04:19:06.181532    4643 out.go:177] * Using the qemu2 driver based on user configuration
	I0722 04:19:06.186593    4643 start.go:297] selected driver: qemu2
	I0722 04:19:06.186601    4643 start.go:901] validating driver "qemu2" against <nil>
	I0722 04:19:06.186609    4643 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0722 04:19:06.188833    4643 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0722 04:19:06.192580    4643 out.go:177] * Automatically selected the socket_vmnet network
	I0722 04:19:06.193914    4643 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0722 04:19:06.193941    4643 cni.go:84] Creating CNI manager for ""
	I0722 04:19:06.193948    4643 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0722 04:19:06.193972    4643 start.go:340] cluster config:
	{Name:kubernetes-upgrade-682000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-682000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster
.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:
SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 04:19:06.197526    4643 iso.go:125] acquiring lock: {Name:mkd71eaf3e91c1dd737b75fca5ca69ff9bdad18f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 04:19:06.206592    4643 out.go:177] * Starting "kubernetes-upgrade-682000" primary control-plane node in "kubernetes-upgrade-682000" cluster
	I0722 04:19:06.210508    4643 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0722 04:19:06.210522    4643 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0722 04:19:06.210531    4643 cache.go:56] Caching tarball of preloaded images
	I0722 04:19:06.210585    4643 preload.go:172] Found /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0722 04:19:06.210590    4643 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0722 04:19:06.210652    4643 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/kubernetes-upgrade-682000/config.json ...
	I0722 04:19:06.210665    4643 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/kubernetes-upgrade-682000/config.json: {Name:mkf5a104825964f61b47b53eeaabed6edef54b49 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 04:19:06.211022    4643 start.go:360] acquireMachinesLock for kubernetes-upgrade-682000: {Name:mkd413881e612ea8d9ddb0175c22cca270cd2452 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0722 04:19:06.211058    4643 start.go:364] duration metric: took 28.083µs to acquireMachinesLock for "kubernetes-upgrade-682000"
	I0722 04:19:06.211069    4643 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-682000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-682000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: D
isableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0722 04:19:06.211093    4643 start.go:125] createHost starting for "" (driver="qemu2")
	I0722 04:19:06.219465    4643 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0722 04:19:06.245245    4643 start.go:159] libmachine.API.Create for "kubernetes-upgrade-682000" (driver="qemu2")
	I0722 04:19:06.245283    4643 client.go:168] LocalClient.Create starting
	I0722 04:19:06.245349    4643 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19313-1127/.minikube/certs/ca.pem
	I0722 04:19:06.245382    4643 main.go:141] libmachine: Decoding PEM data...
	I0722 04:19:06.245391    4643 main.go:141] libmachine: Parsing certificate...
	I0722 04:19:06.245428    4643 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19313-1127/.minikube/certs/cert.pem
	I0722 04:19:06.245450    4643 main.go:141] libmachine: Decoding PEM data...
	I0722 04:19:06.245460    4643 main.go:141] libmachine: Parsing certificate...
	I0722 04:19:06.245889    4643 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19313-1127/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0722 04:19:06.387556    4643 main.go:141] libmachine: Creating SSH key...
	I0722 04:19:06.468266    4643 main.go:141] libmachine: Creating Disk image...
	I0722 04:19:06.468273    4643 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0722 04:19:06.468473    4643 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/kubernetes-upgrade-682000/disk.qcow2.raw /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/kubernetes-upgrade-682000/disk.qcow2
	I0722 04:19:06.478332    4643 main.go:141] libmachine: STDOUT: 
	I0722 04:19:06.478355    4643 main.go:141] libmachine: STDERR: 
	I0722 04:19:06.478409    4643 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/kubernetes-upgrade-682000/disk.qcow2 +20000M
	I0722 04:19:06.486458    4643 main.go:141] libmachine: STDOUT: Image resized.
	
	I0722 04:19:06.486473    4643 main.go:141] libmachine: STDERR: 
	I0722 04:19:06.486501    4643 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/kubernetes-upgrade-682000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/kubernetes-upgrade-682000/disk.qcow2
	I0722 04:19:06.486511    4643 main.go:141] libmachine: Starting QEMU VM...
	I0722 04:19:06.486523    4643 qemu.go:418] Using hvf for hardware acceleration
	I0722 04:19:06.486561    4643 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/kubernetes-upgrade-682000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19313-1127/.minikube/machines/kubernetes-upgrade-682000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/kubernetes-upgrade-682000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:a2:cc:0a:e1:7f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/kubernetes-upgrade-682000/disk.qcow2
	I0722 04:19:06.488197    4643 main.go:141] libmachine: STDOUT: 
	I0722 04:19:06.488218    4643 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0722 04:19:06.488239    4643 client.go:171] duration metric: took 243.1785ms to LocalClient.Create
	I0722 04:19:08.488734    4643 start.go:128] duration metric: took 2.279600291s to createHost
	I0722 04:19:08.488796    4643 start.go:83] releasing machines lock for "kubernetes-upgrade-682000", held for 2.279694625s
	W0722 04:19:08.488852    4643 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0722 04:19:08.496237    4643 out.go:177] * Deleting "kubernetes-upgrade-682000" in qemu2 ...
	W0722 04:19:08.516035    4643 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0722 04:19:08.516055    4643 start.go:729] Will try again in 5 seconds ...
	I0722 04:19:13.514734    4643 start.go:360] acquireMachinesLock for kubernetes-upgrade-682000: {Name:mkd413881e612ea8d9ddb0175c22cca270cd2452 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0722 04:19:13.514914    4643 start.go:364] duration metric: took 122.084µs to acquireMachinesLock for "kubernetes-upgrade-682000"
	I0722 04:19:13.514974    4643 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-682000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-682000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: D
isableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0722 04:19:13.515041    4643 start.go:125] createHost starting for "" (driver="qemu2")
	I0722 04:19:13.521566    4643 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0722 04:19:13.544631    4643 start.go:159] libmachine.API.Create for "kubernetes-upgrade-682000" (driver="qemu2")
	I0722 04:19:13.544761    4643 client.go:168] LocalClient.Create starting
	I0722 04:19:13.544832    4643 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19313-1127/.minikube/certs/ca.pem
	I0722 04:19:13.544872    4643 main.go:141] libmachine: Decoding PEM data...
	I0722 04:19:13.544881    4643 main.go:141] libmachine: Parsing certificate...
	I0722 04:19:13.544930    4643 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19313-1127/.minikube/certs/cert.pem
	I0722 04:19:13.544955    4643 main.go:141] libmachine: Decoding PEM data...
	I0722 04:19:13.544963    4643 main.go:141] libmachine: Parsing certificate...
	I0722 04:19:13.545465    4643 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19313-1127/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0722 04:19:13.687358    4643 main.go:141] libmachine: Creating SSH key...
	I0722 04:19:13.755537    4643 main.go:141] libmachine: Creating Disk image...
	I0722 04:19:13.755543    4643 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0722 04:19:13.755744    4643 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/kubernetes-upgrade-682000/disk.qcow2.raw /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/kubernetes-upgrade-682000/disk.qcow2
	I0722 04:19:13.765249    4643 main.go:141] libmachine: STDOUT: 
	I0722 04:19:13.765276    4643 main.go:141] libmachine: STDERR: 
	I0722 04:19:13.765361    4643 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/kubernetes-upgrade-682000/disk.qcow2 +20000M
	I0722 04:19:13.773381    4643 main.go:141] libmachine: STDOUT: Image resized.
	
	I0722 04:19:13.773397    4643 main.go:141] libmachine: STDERR: 
	I0722 04:19:13.773409    4643 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/kubernetes-upgrade-682000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/kubernetes-upgrade-682000/disk.qcow2
	I0722 04:19:13.773414    4643 main.go:141] libmachine: Starting QEMU VM...
	I0722 04:19:13.773444    4643 qemu.go:418] Using hvf for hardware acceleration
	I0722 04:19:13.773473    4643 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/kubernetes-upgrade-682000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19313-1127/.minikube/machines/kubernetes-upgrade-682000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/kubernetes-upgrade-682000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:82:ad:f4:30:7e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/kubernetes-upgrade-682000/disk.qcow2
	I0722 04:19:13.775169    4643 main.go:141] libmachine: STDOUT: 
	I0722 04:19:13.775187    4643 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0722 04:19:13.775200    4643 client.go:171] duration metric: took 230.570834ms to LocalClient.Create
	I0722 04:19:15.776186    4643 start.go:128] duration metric: took 2.262373875s to createHost
	I0722 04:19:15.776212    4643 start.go:83] releasing machines lock for "kubernetes-upgrade-682000", held for 2.262516875s
	W0722 04:19:15.776307    4643 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-682000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-682000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0722 04:19:15.785610    4643 out.go:177] 
	W0722 04:19:15.789607    4643 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0722 04:19:15.789623    4643 out.go:239] * 
	* 
	W0722 04:19:15.790222    4643 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0722 04:19:15.800577    4643 out.go:177] 

** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-682000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-682000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-arm64 stop -p kubernetes-upgrade-682000: (3.420037708s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-682000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-682000 status --format={{.Host}}: exit status 7 (58.046459ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-682000 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-682000 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.187307333s)

-- stdout --
	* [kubernetes-upgrade-682000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19313
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19313-1127/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19313-1127/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "kubernetes-upgrade-682000" primary control-plane node in "kubernetes-upgrade-682000" cluster
	* Restarting existing qemu2 VM for "kubernetes-upgrade-682000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-682000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0722 04:19:19.316045    4683 out.go:291] Setting OutFile to fd 1 ...
	I0722 04:19:19.316208    4683 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:19:19.316211    4683 out.go:304] Setting ErrFile to fd 2...
	I0722 04:19:19.316214    4683 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:19:19.316341    4683 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19313-1127/.minikube/bin
	I0722 04:19:19.317410    4683 out.go:298] Setting JSON to false
	I0722 04:19:19.333520    4683 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4728,"bootTime":1721642431,"procs":459,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0722 04:19:19.333589    4683 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0722 04:19:19.339084    4683 out.go:177] * [kubernetes-upgrade-682000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0722 04:19:19.346927    4683 out.go:177]   - MINIKUBE_LOCATION=19313
	I0722 04:19:19.346985    4683 notify.go:220] Checking for updates...
	I0722 04:19:19.353890    4683 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19313-1127/kubeconfig
	I0722 04:19:19.356972    4683 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0722 04:19:19.359972    4683 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0722 04:19:19.361433    4683 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19313-1127/.minikube
	I0722 04:19:19.368990    4683 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0722 04:19:19.373070    4683 config.go:182] Loaded profile config "kubernetes-upgrade-682000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0722 04:19:19.373349    4683 driver.go:392] Setting default libvirt URI to qemu:///system
	I0722 04:19:19.377949    4683 out.go:177] * Using the qemu2 driver based on existing profile
	I0722 04:19:19.384920    4683 start.go:297] selected driver: qemu2
	I0722 04:19:19.384925    4683 start.go:901] validating driver "qemu2" against &{Name:kubernetes-upgrade-682000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCon
fig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-682000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disa
bleOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 04:19:19.384975    4683 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0722 04:19:19.387171    4683 cni.go:84] Creating CNI manager for ""
	I0722 04:19:19.387188    4683 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0722 04:19:19.387210    4683 start.go:340] cluster config:
	{Name:kubernetes-upgrade-682000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:kubernetes-upgrade-682000 Nam
espace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePat
h: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 04:19:19.390352    4683 iso.go:125] acquiring lock: {Name:mkd71eaf3e91c1dd737b75fca5ca69ff9bdad18f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 04:19:19.398729    4683 out.go:177] * Starting "kubernetes-upgrade-682000" primary control-plane node in "kubernetes-upgrade-682000" cluster
	I0722 04:19:19.402899    4683 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0722 04:19:19.402914    4683 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4
	I0722 04:19:19.402927    4683 cache.go:56] Caching tarball of preloaded images
	I0722 04:19:19.402981    4683 preload.go:172] Found /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0722 04:19:19.402986    4683 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on docker
	I0722 04:19:19.403038    4683 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/kubernetes-upgrade-682000/config.json ...
	I0722 04:19:19.403531    4683 start.go:360] acquireMachinesLock for kubernetes-upgrade-682000: {Name:mkd413881e612ea8d9ddb0175c22cca270cd2452 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0722 04:19:19.403562    4683 start.go:364] duration metric: took 24.084µs to acquireMachinesLock for "kubernetes-upgrade-682000"
	I0722 04:19:19.403572    4683 start.go:96] Skipping create...Using existing machine configuration
	I0722 04:19:19.403579    4683 fix.go:54] fixHost starting: 
	I0722 04:19:19.403696    4683 fix.go:112] recreateIfNeeded on kubernetes-upgrade-682000: state=Stopped err=<nil>
	W0722 04:19:19.403705    4683 fix.go:138] unexpected machine state, will restart: <nil>
	I0722 04:19:19.411925    4683 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-682000" ...
	I0722 04:19:19.415942    4683 qemu.go:418] Using hvf for hardware acceleration
	I0722 04:19:19.415979    4683 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/kubernetes-upgrade-682000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19313-1127/.minikube/machines/kubernetes-upgrade-682000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/kubernetes-upgrade-682000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:82:ad:f4:30:7e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/kubernetes-upgrade-682000/disk.qcow2
	I0722 04:19:19.417876    4683 main.go:141] libmachine: STDOUT: 
	I0722 04:19:19.417891    4683 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0722 04:19:19.417916    4683 fix.go:56] duration metric: took 14.343833ms for fixHost
	I0722 04:19:19.417920    4683 start.go:83] releasing machines lock for "kubernetes-upgrade-682000", held for 14.36ms
	W0722 04:19:19.417926    4683 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0722 04:19:19.417963    4683 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0722 04:19:19.417967    4683 start.go:729] Will try again in 5 seconds ...
	I0722 04:19:24.418471    4683 start.go:360] acquireMachinesLock for kubernetes-upgrade-682000: {Name:mkd413881e612ea8d9ddb0175c22cca270cd2452 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0722 04:19:24.418948    4683 start.go:364] duration metric: took 365.916µs to acquireMachinesLock for "kubernetes-upgrade-682000"
	I0722 04:19:24.419093    4683 start.go:96] Skipping create...Using existing machine configuration
	I0722 04:19:24.419116    4683 fix.go:54] fixHost starting: 
	I0722 04:19:24.419904    4683 fix.go:112] recreateIfNeeded on kubernetes-upgrade-682000: state=Stopped err=<nil>
	W0722 04:19:24.419930    4683 fix.go:138] unexpected machine state, will restart: <nil>
	I0722 04:19:24.423637    4683 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-682000" ...
	I0722 04:19:24.431386    4683 qemu.go:418] Using hvf for hardware acceleration
	I0722 04:19:24.431628    4683 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/kubernetes-upgrade-682000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19313-1127/.minikube/machines/kubernetes-upgrade-682000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/kubernetes-upgrade-682000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:82:ad:f4:30:7e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/kubernetes-upgrade-682000/disk.qcow2
	I0722 04:19:24.440965    4683 main.go:141] libmachine: STDOUT: 
	I0722 04:19:24.441032    4683 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0722 04:19:24.441148    4683 fix.go:56] duration metric: took 22.040125ms for fixHost
	I0722 04:19:24.441175    4683 start.go:83] releasing machines lock for "kubernetes-upgrade-682000", held for 22.208666ms
	W0722 04:19:24.441364    4683 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-682000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-682000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0722 04:19:24.448364    4683 out.go:177] 
	W0722 04:19:24.452432    4683 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0722 04:19:24.452474    4683 out.go:239] * 
	* 
	W0722 04:19:24.454153    4683 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0722 04:19:24.461352    4683 out.go:177] 

** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-682000 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-682000 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-682000 version --output=json: exit status 1 (48.703625ms)

** stderr ** 
	error: context "kubernetes-upgrade-682000" does not exist

** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
panic.go:626: *** TestKubernetesUpgrade FAILED at 2024-07-22 04:19:24.521194 -0700 PDT m=+3072.810376543
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-682000 -n kubernetes-upgrade-682000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-682000 -n kubernetes-upgrade-682000: exit status 7 (30.414667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-682000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-682000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-682000
--- FAIL: TestKubernetesUpgrade (18.53s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.81s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.33.1 on darwin (arm64)
- MINIKUBE_LOCATION=19313
- KUBECONFIG=/Users/jenkins/minikube-integration/19313-1127/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current3018496856/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.81s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.3s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.33.1 on darwin (arm64)
- MINIKUBE_LOCATION=19313
- KUBECONFIG=/Users/jenkins/minikube-integration/19313-1127/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current1674488487/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.30s)

TestStoppedBinaryUpgrade/Upgrade (579.23s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.2710228702 start -p stopped-upgrade-239000 --memory=2200 --vm-driver=qemu2 
E0722 04:19:47.467454    1618 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/functional-753000/client.crt: no such file or directory
version_upgrade_test.go:183: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.2710228702 start -p stopped-upgrade-239000 --memory=2200 --vm-driver=qemu2 : (52.243883667s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.2710228702 -p stopped-upgrade-239000 stop
version_upgrade_test.go:192: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.2710228702 -p stopped-upgrade-239000 stop: (3.1107375s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-arm64 start -p stopped-upgrade-239000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
E0722 04:22:48.078264    1618 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/addons-974000/client.crt: no such file or directory
E0722 04:24:47.461501    1618 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/functional-753000/client.crt: no such file or directory
version_upgrade_test.go:198: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p stopped-upgrade-239000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m43.786925833s)

-- stdout --
	* [stopped-upgrade-239000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19313
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19313-1127/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19313-1127/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	* Using the qemu2 driver based on existing profile
	* Starting "stopped-upgrade-239000" primary control-plane node in "stopped-upgrade-239000" cluster
	* Restarting existing qemu2 VM for "stopped-upgrade-239000" ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0722 04:20:22.156403    4749 out.go:291] Setting OutFile to fd 1 ...
	I0722 04:20:22.156560    4749 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:20:22.156564    4749 out.go:304] Setting ErrFile to fd 2...
	I0722 04:20:22.156567    4749 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:20:22.156711    4749 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19313-1127/.minikube/bin
	I0722 04:20:22.157906    4749 out.go:298] Setting JSON to false
	I0722 04:20:22.177173    4749 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4791,"bootTime":1721642431,"procs":457,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0722 04:20:22.177251    4749 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0722 04:20:22.182980    4749 out.go:177] * [stopped-upgrade-239000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0722 04:20:22.190974    4749 out.go:177]   - MINIKUBE_LOCATION=19313
	I0722 04:20:22.190998    4749 notify.go:220] Checking for updates...
	I0722 04:20:22.198967    4749 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19313-1127/kubeconfig
	I0722 04:20:22.201941    4749 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0722 04:20:22.205991    4749 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0722 04:20:22.209021    4749 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19313-1127/.minikube
	I0722 04:20:22.212033    4749 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0722 04:20:22.215316    4749 config.go:182] Loaded profile config "stopped-upgrade-239000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0722 04:20:22.218947    4749 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0722 04:20:22.221996    4749 driver.go:392] Setting default libvirt URI to qemu:///system
	I0722 04:20:22.225984    4749 out.go:177] * Using the qemu2 driver based on existing profile
	I0722 04:20:22.232883    4749 start.go:297] selected driver: qemu2
	I0722 04:20:22.232889    4749 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-239000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50463 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgra
de-239000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizat
ions:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0722 04:20:22.232934    4749 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0722 04:20:22.235389    4749 cni.go:84] Creating CNI manager for ""
	I0722 04:20:22.235407    4749 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0722 04:20:22.235437    4749 start.go:340] cluster config:
	{Name:stopped-upgrade-239000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50463 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-239000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:
SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0722 04:20:22.235488    4749 iso.go:125] acquiring lock: {Name:mkd71eaf3e91c1dd737b75fca5ca69ff9bdad18f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 04:20:22.242911    4749 out.go:177] * Starting "stopped-upgrade-239000" primary control-plane node in "stopped-upgrade-239000" cluster
	I0722 04:20:22.246989    4749 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0722 04:20:22.247005    4749 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0722 04:20:22.247016    4749 cache.go:56] Caching tarball of preloaded images
	I0722 04:20:22.247071    4749 preload.go:172] Found /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0722 04:20:22.247077    4749 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0722 04:20:22.247147    4749 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/stopped-upgrade-239000/config.json ...
	I0722 04:20:22.247628    4749 start.go:360] acquireMachinesLock for stopped-upgrade-239000: {Name:mkd413881e612ea8d9ddb0175c22cca270cd2452 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0722 04:20:22.247656    4749 start.go:364] duration metric: took 21.917µs to acquireMachinesLock for "stopped-upgrade-239000"
	I0722 04:20:22.247664    4749 start.go:96] Skipping create...Using existing machine configuration
	I0722 04:20:22.247669    4749 fix.go:54] fixHost starting: 
	I0722 04:20:22.247773    4749 fix.go:112] recreateIfNeeded on stopped-upgrade-239000: state=Stopped err=<nil>
	W0722 04:20:22.247782    4749 fix.go:138] unexpected machine state, will restart: <nil>
	I0722 04:20:22.254948    4749 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-239000" ...
	I0722 04:20:22.258974    4749 qemu.go:418] Using hvf for hardware acceleration
	I0722 04:20:22.259038    4749 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.0.2/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/stopped-upgrade-239000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19313-1127/.minikube/machines/stopped-upgrade-239000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/stopped-upgrade-239000/qemu.pid -nic user,model=virtio,hostfwd=tcp::50430-:22,hostfwd=tcp::50431-:2376,hostname=stopped-upgrade-239000 -daemonize /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/stopped-upgrade-239000/disk.qcow2
	I0722 04:20:22.304024    4749 main.go:141] libmachine: STDOUT: 
	I0722 04:20:22.304050    4749 main.go:141] libmachine: STDERR: 
	I0722 04:20:22.304056    4749 main.go:141] libmachine: Waiting for VM to start (ssh -p 50430 docker@127.0.0.1)...
	I0722 04:20:42.194511    4749 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/stopped-upgrade-239000/config.json ...
	I0722 04:20:42.195261    4749 machine.go:94] provisionDockerMachine start ...
	I0722 04:20:42.195483    4749 main.go:141] libmachine: Using SSH client type: native
	I0722 04:20:42.195947    4749 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100c5aa10] 0x100c5d270 <nil>  [] 0s} localhost 50430 <nil> <nil>}
	I0722 04:20:42.195961    4749 main.go:141] libmachine: About to run SSH command:
	hostname
	I0722 04:20:42.271873    4749 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0722 04:20:42.271900    4749 buildroot.go:166] provisioning hostname "stopped-upgrade-239000"
	I0722 04:20:42.272217    4749 main.go:141] libmachine: Using SSH client type: native
	I0722 04:20:42.272416    4749 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100c5aa10] 0x100c5d270 <nil>  [] 0s} localhost 50430 <nil> <nil>}
	I0722 04:20:42.272433    4749 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-239000 && echo "stopped-upgrade-239000" | sudo tee /etc/hostname
	I0722 04:20:42.337458    4749 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-239000
	
	I0722 04:20:42.337514    4749 main.go:141] libmachine: Using SSH client type: native
	I0722 04:20:42.337646    4749 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100c5aa10] 0x100c5d270 <nil>  [] 0s} localhost 50430 <nil> <nil>}
	I0722 04:20:42.337655    4749 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-239000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-239000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-239000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0722 04:20:42.394331    4749 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0722 04:20:42.394343    4749 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19313-1127/.minikube CaCertPath:/Users/jenkins/minikube-integration/19313-1127/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19313-1127/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19313-1127/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19313-1127/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19313-1127/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19313-1127/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19313-1127/.minikube}
	I0722 04:20:42.394355    4749 buildroot.go:174] setting up certificates
	I0722 04:20:42.394360    4749 provision.go:84] configureAuth start
	I0722 04:20:42.394366    4749 provision.go:143] copyHostCerts
	I0722 04:20:42.394439    4749 exec_runner.go:144] found /Users/jenkins/minikube-integration/19313-1127/.minikube/ca.pem, removing ...
	I0722 04:20:42.394445    4749 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19313-1127/.minikube/ca.pem
	I0722 04:20:42.394548    4749 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19313-1127/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19313-1127/.minikube/ca.pem (1078 bytes)
	I0722 04:20:42.394753    4749 exec_runner.go:144] found /Users/jenkins/minikube-integration/19313-1127/.minikube/cert.pem, removing ...
	I0722 04:20:42.394757    4749 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19313-1127/.minikube/cert.pem
	I0722 04:20:42.394807    4749 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19313-1127/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19313-1127/.minikube/cert.pem (1123 bytes)
	I0722 04:20:42.394921    4749 exec_runner.go:144] found /Users/jenkins/minikube-integration/19313-1127/.minikube/key.pem, removing ...
	I0722 04:20:42.394924    4749 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19313-1127/.minikube/key.pem
	I0722 04:20:42.394972    4749 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19313-1127/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19313-1127/.minikube/key.pem (1675 bytes)
	I0722 04:20:42.395062    4749 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19313-1127/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19313-1127/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-239000 san=[127.0.0.1 localhost minikube stopped-upgrade-239000]
	I0722 04:20:42.476373    4749 provision.go:177] copyRemoteCerts
	I0722 04:20:42.476418    4749 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0722 04:20:42.476427    4749 sshutil.go:53] new ssh client: &{IP:localhost Port:50430 SSHKeyPath:/Users/jenkins/minikube-integration/19313-1127/.minikube/machines/stopped-upgrade-239000/id_rsa Username:docker}
	I0722 04:20:42.506384    4749 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1127/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0722 04:20:42.513197    4749 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0722 04:20:42.520655    4749 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0722 04:20:42.527994    4749 provision.go:87] duration metric: took 133.624125ms to configureAuth
	I0722 04:20:42.528005    4749 buildroot.go:189] setting minikube options for container-runtime
	I0722 04:20:42.528110    4749 config.go:182] Loaded profile config "stopped-upgrade-239000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0722 04:20:42.528148    4749 main.go:141] libmachine: Using SSH client type: native
	I0722 04:20:42.528236    4749 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100c5aa10] 0x100c5d270 <nil>  [] 0s} localhost 50430 <nil> <nil>}
	I0722 04:20:42.528241    4749 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0722 04:20:42.584438    4749 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0722 04:20:42.584446    4749 buildroot.go:70] root file system type: tmpfs
	I0722 04:20:42.584494    4749 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0722 04:20:42.584536    4749 main.go:141] libmachine: Using SSH client type: native
	I0722 04:20:42.584669    4749 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100c5aa10] 0x100c5d270 <nil>  [] 0s} localhost 50430 <nil> <nil>}
	I0722 04:20:42.584701    4749 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0722 04:20:42.642465    4749 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0722 04:20:42.642517    4749 main.go:141] libmachine: Using SSH client type: native
	I0722 04:20:42.642629    4749 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100c5aa10] 0x100c5d270 <nil>  [] 0s} localhost 50430 <nil> <nil>}
	I0722 04:20:42.642637    4749 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0722 04:20:43.014601    4749 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0722 04:20:43.014614    4749 machine.go:97] duration metric: took 819.359375ms to provisionDockerMachine
	I0722 04:20:43.014621    4749 start.go:293] postStartSetup for "stopped-upgrade-239000" (driver="qemu2")
	I0722 04:20:43.014628    4749 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0722 04:20:43.014688    4749 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0722 04:20:43.014696    4749 sshutil.go:53] new ssh client: &{IP:localhost Port:50430 SSHKeyPath:/Users/jenkins/minikube-integration/19313-1127/.minikube/machines/stopped-upgrade-239000/id_rsa Username:docker}
	I0722 04:20:43.044094    4749 ssh_runner.go:195] Run: cat /etc/os-release
	I0722 04:20:43.045510    4749 info.go:137] Remote host: Buildroot 2021.02.12
	I0722 04:20:43.045517    4749 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19313-1127/.minikube/addons for local assets ...
	I0722 04:20:43.045605    4749 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19313-1127/.minikube/files for local assets ...
	I0722 04:20:43.045728    4749 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19313-1127/.minikube/files/etc/ssl/certs/16182.pem -> 16182.pem in /etc/ssl/certs
	I0722 04:20:43.045861    4749 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0722 04:20:43.048873    4749 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1127/.minikube/files/etc/ssl/certs/16182.pem --> /etc/ssl/certs/16182.pem (1708 bytes)
	I0722 04:20:43.056153    4749 start.go:296] duration metric: took 41.527584ms for postStartSetup
	I0722 04:20:43.056165    4749 fix.go:56] duration metric: took 20.808920417s for fixHost
	I0722 04:20:43.056196    4749 main.go:141] libmachine: Using SSH client type: native
	I0722 04:20:43.056310    4749 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100c5aa10] 0x100c5d270 <nil>  [] 0s} localhost 50430 <nil> <nil>}
	I0722 04:20:43.056314    4749 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0722 04:20:43.110699    4749 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721647242.983111796
	
	I0722 04:20:43.110706    4749 fix.go:216] guest clock: 1721647242.983111796
	I0722 04:20:43.110710    4749 fix.go:229] Guest: 2024-07-22 04:20:42.983111796 -0700 PDT Remote: 2024-07-22 04:20:43.056167 -0700 PDT m=+20.928665668 (delta=-73.055204ms)
	I0722 04:20:43.110724    4749 fix.go:200] guest clock delta is within tolerance: -73.055204ms
	I0722 04:20:43.110727    4749 start.go:83] releasing machines lock for "stopped-upgrade-239000", held for 20.863491167s
	I0722 04:20:43.110782    4749 ssh_runner.go:195] Run: cat /version.json
	I0722 04:20:43.110791    4749 sshutil.go:53] new ssh client: &{IP:localhost Port:50430 SSHKeyPath:/Users/jenkins/minikube-integration/19313-1127/.minikube/machines/stopped-upgrade-239000/id_rsa Username:docker}
	I0722 04:20:43.110783    4749 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0722 04:20:43.110821    4749 sshutil.go:53] new ssh client: &{IP:localhost Port:50430 SSHKeyPath:/Users/jenkins/minikube-integration/19313-1127/.minikube/machines/stopped-upgrade-239000/id_rsa Username:docker}
	W0722 04:20:43.111313    4749 sshutil.go:64] dial failure (will retry): dial tcp [::1]:50430: connect: connection refused
	I0722 04:20:43.111336    4749 retry.go:31] will retry after 360.026592ms: dial tcp [::1]:50430: connect: connection refused
	W0722 04:20:43.527654    4749 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0722 04:20:43.527771    4749 ssh_runner.go:195] Run: systemctl --version
	I0722 04:20:43.530867    4749 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0722 04:20:43.533745    4749 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0722 04:20:43.533798    4749 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0722 04:20:43.538337    4749 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0722 04:20:43.544872    4749 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0722 04:20:43.544883    4749 start.go:495] detecting cgroup driver to use...
	I0722 04:20:43.544968    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0722 04:20:43.553325    4749 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0722 04:20:43.556925    4749 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0722 04:20:43.560785    4749 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0722 04:20:43.560819    4749 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0722 04:20:43.564256    4749 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0722 04:20:43.567514    4749 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0722 04:20:43.570718    4749 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0722 04:20:43.576093    4749 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0722 04:20:43.580028    4749 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0722 04:20:43.583337    4749 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0722 04:20:43.588665    4749 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0722 04:20:43.592465    4749 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0722 04:20:43.595593    4749 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0722 04:20:43.598324    4749 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 04:20:43.674987    4749 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0722 04:20:43.680500    4749 start.go:495] detecting cgroup driver to use...
	I0722 04:20:43.680549    4749 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0722 04:20:43.687321    4749 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0722 04:20:43.692803    4749 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0722 04:20:43.699792    4749 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0722 04:20:43.704174    4749 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0722 04:20:43.708545    4749 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0722 04:20:43.760373    4749 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0722 04:20:43.765152    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0722 04:20:43.770399    4749 ssh_runner.go:195] Run: which cri-dockerd
	I0722 04:20:43.771771    4749 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0722 04:20:43.774237    4749 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0722 04:20:43.778927    4749 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0722 04:20:43.859394    4749 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0722 04:20:43.935863    4749 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0722 04:20:43.935934    4749 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0722 04:20:43.941183    4749 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 04:20:44.019717    4749 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0722 04:20:45.180724    4749 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.161011292s)
	I0722 04:20:45.180794    4749 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0722 04:20:45.185671    4749 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0722 04:20:45.191977    4749 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0722 04:20:45.196685    4749 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0722 04:20:45.274732    4749 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0722 04:20:45.350368    4749 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 04:20:45.425915    4749 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0722 04:20:45.431805    4749 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0722 04:20:45.436258    4749 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 04:20:45.519430    4749 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0722 04:20:45.560302    4749 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0722 04:20:45.560383    4749 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0722 04:20:45.562521    4749 start.go:563] Will wait 60s for crictl version
	I0722 04:20:45.562579    4749 ssh_runner.go:195] Run: which crictl
	I0722 04:20:45.563892    4749 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0722 04:20:45.578200    4749 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0722 04:20:45.578272    4749 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0722 04:20:45.594327    4749 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0722 04:20:45.616634    4749 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0722 04:20:45.616700    4749 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0722 04:20:45.618017    4749 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0722 04:20:45.621466    4749 kubeadm.go:883] updating cluster {Name:stopped-upgrade-239000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50463 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName
:stopped-upgrade-239000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Di
sableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0722 04:20:45.621510    4749 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0722 04:20:45.621551    4749 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0722 04:20:45.632225    4749 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0722 04:20:45.632234    4749 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0722 04:20:45.632285    4749 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0722 04:20:45.635691    4749 ssh_runner.go:195] Run: which lz4
	I0722 04:20:45.636993    4749 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0722 04:20:45.638231    4749 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0722 04:20:45.638242    4749 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0722 04:20:46.569392    4749 docker.go:649] duration metric: took 932.444417ms to copy over tarball
	I0722 04:20:46.569449    4749 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0722 04:20:47.721624    4749 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.152182209s)
	I0722 04:20:47.721637    4749 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0722 04:20:47.739369    4749 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0722 04:20:47.742660    4749 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0722 04:20:47.748292    4749 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 04:20:47.832190    4749 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0722 04:20:49.518186    4749 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.686005833s)
	I0722 04:20:49.518290    4749 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0722 04:20:49.532248    4749 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0722 04:20:49.532257    4749 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0722 04:20:49.532262    4749 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0722 04:20:49.537726    4749 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 04:20:49.539690    4749 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0722 04:20:49.541401    4749 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0722 04:20:49.541447    4749 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 04:20:49.543463    4749 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0722 04:20:49.543439    4749 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0722 04:20:49.544820    4749 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0722 04:20:49.544997    4749 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0722 04:20:49.546372    4749 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0722 04:20:49.546469    4749 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0722 04:20:49.547550    4749 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0722 04:20:49.547604    4749 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0722 04:20:49.548547    4749 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0722 04:20:49.548561    4749 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0722 04:20:49.549445    4749 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0722 04:20:49.550130    4749 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0722 04:20:50.010737    4749 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0722 04:20:50.023272    4749 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0722 04:20:50.023298    4749 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0722 04:20:50.023348    4749 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0722 04:20:50.026924    4749 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0722 04:20:50.032296    4749 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0722 04:20:50.034512    4749 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	W0722 04:20:50.035853    4749 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0722 04:20:50.035970    4749 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0722 04:20:50.036517    4749 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0722 04:20:50.039697    4749 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0722 04:20:50.039716    4749 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0722 04:20:50.039757    4749 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0722 04:20:50.047476    4749 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0722 04:20:50.047501    4749 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0722 04:20:50.047565    4749 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0722 04:20:50.047605    4749 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0722 04:20:50.061087    4749 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0722 04:20:50.061120    4749 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0722 04:20:50.061186    4749 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0722 04:20:50.061455    4749 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0722 04:20:50.061465    4749 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0722 04:20:50.061485    4749 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0722 04:20:50.073080    4749 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0722 04:20:50.075697    4749 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0722 04:20:50.075703    4749 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0722 04:20:50.075715    4749 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0722 04:20:50.075763    4749 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0722 04:20:50.088008    4749 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0722 04:20:50.092484    4749 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0722 04:20:50.092492    4749 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0722 04:20:50.092518    4749 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0722 04:20:50.092605    4749 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0722 04:20:50.092605    4749 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0722 04:20:50.101416    4749 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0722 04:20:50.101445    4749 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0722 04:20:50.101452    4749 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0722 04:20:50.101463    4749 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0722 04:20:50.101515    4749 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0722 04:20:50.101534    4749 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0722 04:20:50.101571    4749 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0722 04:20:50.117856    4749 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0722 04:20:50.133159    4749 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0722 04:20:50.133182    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0722 04:20:50.175465    4749 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0722 04:20:50.175487    4749 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0722 04:20:50.175493    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0722 04:20:50.211543    4749 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	W0722 04:20:52.561556    4749 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0722 04:20:52.561718    4749 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 04:20:52.577409    4749 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0722 04:20:52.577438    4749 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 04:20:52.577506    4749 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 04:20:52.594066    4749 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0722 04:20:52.594180    4749 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0722 04:20:52.595705    4749 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0722 04:20:52.595715    4749 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0722 04:20:52.627538    4749 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0722 04:20:52.627558    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0722 04:20:52.861356    4749 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0722 04:20:52.861392    4749 cache_images.go:92] duration metric: took 3.329182s to LoadCachedImages
	W0722 04:20:52.861438    4749 out.go:239] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
	I0722 04:20:52.861444    4749 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0722 04:20:52.861497    4749 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-239000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-239000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0722 04:20:52.861576    4749 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0722 04:20:52.875384    4749 cni.go:84] Creating CNI manager for ""
	I0722 04:20:52.875397    4749 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0722 04:20:52.875401    4749 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0722 04:20:52.875410    4749 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-239000 NodeName:stopped-upgrade-239000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0722 04:20:52.875470    4749 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-239000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0722 04:20:52.875522    4749 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0722 04:20:52.878343    4749 binaries.go:44] Found k8s binaries, skipping transfer
	I0722 04:20:52.878375    4749 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0722 04:20:52.881069    4749 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0722 04:20:52.885924    4749 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0722 04:20:52.890570    4749 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0722 04:20:52.895969    4749 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0722 04:20:52.897207    4749 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0722 04:20:52.900845    4749 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 04:20:52.985574    4749 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 04:20:52.991003    4749 certs.go:68] Setting up /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/stopped-upgrade-239000 for IP: 10.0.2.15
	I0722 04:20:52.991010    4749 certs.go:194] generating shared ca certs ...
	I0722 04:20:52.991019    4749 certs.go:226] acquiring lock for ca certs: {Name:mk3f2c80d56e217629ae5cc59f1253ebc769d305 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 04:20:52.991188    4749 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19313-1127/.minikube/ca.key
	I0722 04:20:52.991240    4749 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19313-1127/.minikube/proxy-client-ca.key
	I0722 04:20:52.991248    4749 certs.go:256] generating profile certs ...
	I0722 04:20:52.991322    4749 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/stopped-upgrade-239000/client.key
	I0722 04:20:52.991346    4749 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/stopped-upgrade-239000/apiserver.key.5038eef0
	I0722 04:20:52.991360    4749 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/stopped-upgrade-239000/apiserver.crt.5038eef0 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0722 04:20:53.179011    4749 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/stopped-upgrade-239000/apiserver.crt.5038eef0 ...
	I0722 04:20:53.179025    4749 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/stopped-upgrade-239000/apiserver.crt.5038eef0: {Name:mk320ab3e80faa0708703cf9e34fb5fa8d76946f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 04:20:53.179784    4749 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/stopped-upgrade-239000/apiserver.key.5038eef0 ...
	I0722 04:20:53.179790    4749 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/stopped-upgrade-239000/apiserver.key.5038eef0: {Name:mk74e40d2b818fe75dad8d11f3f613fddec42567 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 04:20:53.179932    4749 certs.go:381] copying /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/stopped-upgrade-239000/apiserver.crt.5038eef0 -> /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/stopped-upgrade-239000/apiserver.crt
	I0722 04:20:53.180480    4749 certs.go:385] copying /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/stopped-upgrade-239000/apiserver.key.5038eef0 -> /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/stopped-upgrade-239000/apiserver.key
	I0722 04:20:53.180643    4749 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/stopped-upgrade-239000/proxy-client.key
	I0722 04:20:53.180781    4749 certs.go:484] found cert: /Users/jenkins/minikube-integration/19313-1127/.minikube/certs/1618.pem (1338 bytes)
	W0722 04:20:53.180812    4749 certs.go:480] ignoring /Users/jenkins/minikube-integration/19313-1127/.minikube/certs/1618_empty.pem, impossibly tiny 0 bytes
	I0722 04:20:53.180818    4749 certs.go:484] found cert: /Users/jenkins/minikube-integration/19313-1127/.minikube/certs/ca-key.pem (1679 bytes)
	I0722 04:20:53.180844    4749 certs.go:484] found cert: /Users/jenkins/minikube-integration/19313-1127/.minikube/certs/ca.pem (1078 bytes)
	I0722 04:20:53.180869    4749 certs.go:484] found cert: /Users/jenkins/minikube-integration/19313-1127/.minikube/certs/cert.pem (1123 bytes)
	I0722 04:20:53.180892    4749 certs.go:484] found cert: /Users/jenkins/minikube-integration/19313-1127/.minikube/certs/key.pem (1675 bytes)
	I0722 04:20:53.180947    4749 certs.go:484] found cert: /Users/jenkins/minikube-integration/19313-1127/.minikube/files/etc/ssl/certs/16182.pem (1708 bytes)
	I0722 04:20:53.181306    4749 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1127/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0722 04:20:53.188450    4749 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1127/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0722 04:20:53.195829    4749 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1127/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0722 04:20:53.202757    4749 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1127/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0722 04:20:53.209054    4749 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/stopped-upgrade-239000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0722 04:20:53.216248    4749 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/stopped-upgrade-239000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0722 04:20:53.223501    4749 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/stopped-upgrade-239000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0722 04:20:53.230154    4749 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/stopped-upgrade-239000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0722 04:20:53.236873    4749 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1127/.minikube/files/etc/ssl/certs/16182.pem --> /usr/share/ca-certificates/16182.pem (1708 bytes)
	I0722 04:20:53.244129    4749 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1127/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0722 04:20:53.250938    4749 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1127/.minikube/certs/1618.pem --> /usr/share/ca-certificates/1618.pem (1338 bytes)
	I0722 04:20:53.257361    4749 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0722 04:20:53.262385    4749 ssh_runner.go:195] Run: openssl version
	I0722 04:20:53.264211    4749 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0722 04:20:53.268105    4749 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0722 04:20:53.269466    4749 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 22 10:29 /usr/share/ca-certificates/minikubeCA.pem
	I0722 04:20:53.269486    4749 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0722 04:20:53.271271    4749 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0722 04:20:53.274193    4749 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1618.pem && ln -fs /usr/share/ca-certificates/1618.pem /etc/ssl/certs/1618.pem"
	I0722 04:20:53.277136    4749 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1618.pem
	I0722 04:20:53.278530    4749 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 22 10:36 /usr/share/ca-certificates/1618.pem
	I0722 04:20:53.278555    4749 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1618.pem
	I0722 04:20:53.280229    4749 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1618.pem /etc/ssl/certs/51391683.0"
	I0722 04:20:53.283228    4749 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16182.pem && ln -fs /usr/share/ca-certificates/16182.pem /etc/ssl/certs/16182.pem"
	I0722 04:20:53.286072    4749 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16182.pem
	I0722 04:20:53.287444    4749 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 22 10:36 /usr/share/ca-certificates/16182.pem
	I0722 04:20:53.287461    4749 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16182.pem
	I0722 04:20:53.289165    4749 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16182.pem /etc/ssl/certs/3ec20f2e.0"
	I0722 04:20:53.292811    4749 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0722 04:20:53.294206    4749 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0722 04:20:53.296153    4749 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0722 04:20:53.298199    4749 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0722 04:20:53.300043    4749 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0722 04:20:53.301716    4749 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0722 04:20:53.303398    4749 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0722 04:20:53.305111    4749 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-239000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50463 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-239000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0722 04:20:53.305180    4749 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0722 04:20:53.315723    4749 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0722 04:20:53.318742    4749 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0722 04:20:53.318748    4749 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0722 04:20:53.318766    4749 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0722 04:20:53.321622    4749 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0722 04:20:53.321933    4749 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-239000" does not appear in /Users/jenkins/minikube-integration/19313-1127/kubeconfig
	I0722 04:20:53.322030    4749 kubeconfig.go:62] /Users/jenkins/minikube-integration/19313-1127/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-239000" cluster setting kubeconfig missing "stopped-upgrade-239000" context setting]
	I0722 04:20:53.322231    4749 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19313-1127/kubeconfig: {Name:mkb5cae8b3f3a2ff5a3e393f1e4daf97762f1a5b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 04:20:53.322686    4749 kapi.go:59] client config for stopped-upgrade-239000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/stopped-upgrade-239000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/stopped-upgrade-239000/client.key", CAFile:"/Users/jenkins/minikube-integration/19313-1127/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x101fef790), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0722 04:20:53.323006    4749 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0722 04:20:53.325765    4749 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-239000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
	I0722 04:20:53.325772    4749 kubeadm.go:1160] stopping kube-system containers ...
	I0722 04:20:53.325810    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0722 04:20:53.336978    4749 docker.go:483] Stopping containers: [b242274d2995 82c7409ff149 107f02380e96 cdb2f02c95ca 6d3fe4f4d288 9673cbf4cea7 d58d89cd0382 38d038729737 286c0889019f]
	I0722 04:20:53.337044    4749 ssh_runner.go:195] Run: docker stop b242274d2995 82c7409ff149 107f02380e96 cdb2f02c95ca 6d3fe4f4d288 9673cbf4cea7 d58d89cd0382 38d038729737 286c0889019f
	I0722 04:20:53.347054    4749 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0722 04:20:53.352605    4749 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 04:20:53.355705    4749 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 04:20:53.355710    4749 kubeadm.go:157] found existing configuration files:
	
	I0722 04:20:53.355729    4749 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50463 /etc/kubernetes/admin.conf
	I0722 04:20:53.358131    4749 kubeadm.go:163] "https://control-plane.minikube.internal:50463" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50463 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 04:20:53.358152    4749 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 04:20:53.360749    4749 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50463 /etc/kubernetes/kubelet.conf
	I0722 04:20:53.363654    4749 kubeadm.go:163] "https://control-plane.minikube.internal:50463" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50463 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 04:20:53.363675    4749 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 04:20:53.366321    4749 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50463 /etc/kubernetes/controller-manager.conf
	I0722 04:20:53.368801    4749 kubeadm.go:163] "https://control-plane.minikube.internal:50463" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50463 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 04:20:53.368822    4749 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 04:20:53.371716    4749 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50463 /etc/kubernetes/scheduler.conf
	I0722 04:20:53.374276    4749 kubeadm.go:163] "https://control-plane.minikube.internal:50463" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50463 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 04:20:53.374315    4749 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0722 04:20:53.377205    4749 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0722 04:20:53.380637    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 04:20:53.404695    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 04:20:53.781762    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0722 04:20:53.911001    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 04:20:53.938129    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0722 04:20:53.965875    4749 api_server.go:52] waiting for apiserver process to appear ...
	I0722 04:20:53.965965    4749 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 04:20:54.466017    4749 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 04:20:54.968050    4749 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 04:20:54.972469    4749 api_server.go:72] duration metric: took 1.006612792s to wait for apiserver process to appear ...
	I0722 04:20:54.972479    4749 api_server.go:88] waiting for apiserver healthz status ...
	I0722 04:20:54.972489    4749 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:20:59.972752    4749 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:20:59.972777    4749 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:21:04.974405    4749 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:21:04.974450    4749 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:21:09.974647    4749 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:21:09.974687    4749 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:21:14.975058    4749 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:21:14.975162    4749 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:21:19.975907    4749 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:21:19.976026    4749 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:21:24.976939    4749 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:21:24.976998    4749 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:21:29.980058    4749 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:21:29.980124    4749 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:21:34.980864    4749 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:21:34.980898    4749 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:21:39.982888    4749 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:21:39.982942    4749 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:21:44.983362    4749 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:21:44.983417    4749 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:21:49.985704    4749 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:21:49.985779    4749 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:21:54.987962    4749 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:21:54.988063    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0722 04:21:55.006929    4749 logs.go:276] 2 containers: [6f7819ffc2dd b242274d2995]
	I0722 04:21:55.007002    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0722 04:21:55.017157    4749 logs.go:276] 2 containers: [c1a3c1bc5e08 cdb2f02c95ca]
	I0722 04:21:55.017215    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0722 04:21:55.028406    4749 logs.go:276] 1 containers: [a11f092c49f3]
	I0722 04:21:55.028468    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0722 04:21:55.038984    4749 logs.go:276] 2 containers: [829d882a5dcf 9673cbf4cea7]
	I0722 04:21:55.039060    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0722 04:21:55.049164    4749 logs.go:276] 1 containers: [1be7d7e3405b]
	I0722 04:21:55.049227    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0722 04:21:55.059826    4749 logs.go:276] 2 containers: [b9a200dc8c73 107f02380e96]
	I0722 04:21:55.059887    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0722 04:21:55.069868    4749 logs.go:276] 0 containers: []
	W0722 04:21:55.069880    4749 logs.go:278] No container was found matching "kindnet"
	I0722 04:21:55.069937    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0722 04:21:55.081751    4749 logs.go:276] 2 containers: [ac2f27131054 3222ecbcbbb5]
	I0722 04:21:55.081773    4749 logs.go:123] Gathering logs for etcd [cdb2f02c95ca] ...
	I0722 04:21:55.081780    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdb2f02c95ca"
	I0722 04:21:55.103153    4749 logs.go:123] Gathering logs for kube-scheduler [9673cbf4cea7] ...
	I0722 04:21:55.103163    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9673cbf4cea7"
	I0722 04:21:55.125929    4749 logs.go:123] Gathering logs for describe nodes ...
	I0722 04:21:55.125946    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0722 04:21:55.214659    4749 logs.go:123] Gathering logs for etcd [c1a3c1bc5e08] ...
	I0722 04:21:55.214673    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1a3c1bc5e08"
	I0722 04:21:55.229255    4749 logs.go:123] Gathering logs for kubelet ...
	I0722 04:21:55.229270    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 04:21:55.270482    4749 logs.go:123] Gathering logs for kube-apiserver [b242274d2995] ...
	I0722 04:21:55.270493    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b242274d2995"
	I0722 04:21:55.303038    4749 logs.go:123] Gathering logs for kube-proxy [1be7d7e3405b] ...
	I0722 04:21:55.303050    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1be7d7e3405b"
	I0722 04:21:55.314917    4749 logs.go:123] Gathering logs for kube-controller-manager [b9a200dc8c73] ...
	I0722 04:21:55.314929    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9a200dc8c73"
	I0722 04:21:55.340510    4749 logs.go:123] Gathering logs for storage-provisioner [ac2f27131054] ...
	I0722 04:21:55.340525    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac2f27131054"
	I0722 04:21:55.352274    4749 logs.go:123] Gathering logs for Docker ...
	I0722 04:21:55.352285    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0722 04:21:55.376977    4749 logs.go:123] Gathering logs for container status ...
	I0722 04:21:55.376991    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 04:21:55.389843    4749 logs.go:123] Gathering logs for coredns [a11f092c49f3] ...
	I0722 04:21:55.389856    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a11f092c49f3"
	I0722 04:21:55.401983    4749 logs.go:123] Gathering logs for kube-scheduler [829d882a5dcf] ...
	I0722 04:21:55.401995    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 829d882a5dcf"
	I0722 04:21:55.414616    4749 logs.go:123] Gathering logs for kube-controller-manager [107f02380e96] ...
	I0722 04:21:55.414627    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 107f02380e96"
	I0722 04:21:55.429679    4749 logs.go:123] Gathering logs for storage-provisioner [3222ecbcbbb5] ...
	I0722 04:21:55.429693    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3222ecbcbbb5"
	I0722 04:21:55.441337    4749 logs.go:123] Gathering logs for dmesg ...
	I0722 04:21:55.441349    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 04:21:55.446184    4749 logs.go:123] Gathering logs for kube-apiserver [6f7819ffc2dd] ...
	I0722 04:21:55.446192    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f7819ffc2dd"
	I0722 04:21:57.964367    4749 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:22:02.966640    4749 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:22:02.967074    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0722 04:22:03.005043    4749 logs.go:276] 2 containers: [6f7819ffc2dd b242274d2995]
	I0722 04:22:03.005182    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0722 04:22:03.031272    4749 logs.go:276] 2 containers: [c1a3c1bc5e08 cdb2f02c95ca]
	I0722 04:22:03.031366    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0722 04:22:03.044441    4749 logs.go:276] 1 containers: [a11f092c49f3]
	I0722 04:22:03.044519    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0722 04:22:03.064669    4749 logs.go:276] 2 containers: [829d882a5dcf 9673cbf4cea7]
	I0722 04:22:03.064742    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0722 04:22:03.075517    4749 logs.go:276] 1 containers: [1be7d7e3405b]
	I0722 04:22:03.075582    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0722 04:22:03.086347    4749 logs.go:276] 2 containers: [b9a200dc8c73 107f02380e96]
	I0722 04:22:03.086418    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0722 04:22:03.105588    4749 logs.go:276] 0 containers: []
	W0722 04:22:03.105599    4749 logs.go:278] No container was found matching "kindnet"
	I0722 04:22:03.105652    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0722 04:22:03.116627    4749 logs.go:276] 2 containers: [ac2f27131054 3222ecbcbbb5]
	I0722 04:22:03.116644    4749 logs.go:123] Gathering logs for kube-scheduler [829d882a5dcf] ...
	I0722 04:22:03.116650    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 829d882a5dcf"
	I0722 04:22:03.130399    4749 logs.go:123] Gathering logs for Docker ...
	I0722 04:22:03.130410    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0722 04:22:03.157338    4749 logs.go:123] Gathering logs for kube-apiserver [6f7819ffc2dd] ...
	I0722 04:22:03.157347    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f7819ffc2dd"
	I0722 04:22:03.171051    4749 logs.go:123] Gathering logs for kube-proxy [1be7d7e3405b] ...
	I0722 04:22:03.171061    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1be7d7e3405b"
	I0722 04:22:03.182924    4749 logs.go:123] Gathering logs for kube-controller-manager [b9a200dc8c73] ...
	I0722 04:22:03.182935    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9a200dc8c73"
	I0722 04:22:03.200186    4749 logs.go:123] Gathering logs for storage-provisioner [ac2f27131054] ...
	I0722 04:22:03.200201    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac2f27131054"
	I0722 04:22:03.212263    4749 logs.go:123] Gathering logs for storage-provisioner [3222ecbcbbb5] ...
	I0722 04:22:03.212273    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3222ecbcbbb5"
	I0722 04:22:03.223805    4749 logs.go:123] Gathering logs for container status ...
	I0722 04:22:03.223817    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 04:22:03.235601    4749 logs.go:123] Gathering logs for dmesg ...
	I0722 04:22:03.235611    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 04:22:03.239976    4749 logs.go:123] Gathering logs for coredns [a11f092c49f3] ...
	I0722 04:22:03.239984    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a11f092c49f3"
	I0722 04:22:03.251397    4749 logs.go:123] Gathering logs for kube-scheduler [9673cbf4cea7] ...
	I0722 04:22:03.251408    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9673cbf4cea7"
	I0722 04:22:03.274138    4749 logs.go:123] Gathering logs for etcd [c1a3c1bc5e08] ...
	I0722 04:22:03.274149    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1a3c1bc5e08"
	I0722 04:22:03.288326    4749 logs.go:123] Gathering logs for etcd [cdb2f02c95ca] ...
	I0722 04:22:03.288337    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdb2f02c95ca"
	I0722 04:22:03.303113    4749 logs.go:123] Gathering logs for kube-controller-manager [107f02380e96] ...
	I0722 04:22:03.303123    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 107f02380e96"
	I0722 04:22:03.317699    4749 logs.go:123] Gathering logs for kubelet ...
	I0722 04:22:03.317734    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 04:22:03.355223    4749 logs.go:123] Gathering logs for describe nodes ...
	I0722 04:22:03.355235    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0722 04:22:03.390283    4749 logs.go:123] Gathering logs for kube-apiserver [b242274d2995] ...
	I0722 04:22:03.390294    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b242274d2995"
	I0722 04:22:05.917777    4749 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:22:10.919965    4749 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:22:10.920155    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0722 04:22:10.938805    4749 logs.go:276] 2 containers: [6f7819ffc2dd b242274d2995]
	I0722 04:22:10.938896    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0722 04:22:10.953198    4749 logs.go:276] 2 containers: [c1a3c1bc5e08 cdb2f02c95ca]
	I0722 04:22:10.953270    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0722 04:22:10.967837    4749 logs.go:276] 1 containers: [a11f092c49f3]
	I0722 04:22:10.967905    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0722 04:22:10.978787    4749 logs.go:276] 2 containers: [829d882a5dcf 9673cbf4cea7]
	I0722 04:22:10.978862    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0722 04:22:10.993254    4749 logs.go:276] 1 containers: [1be7d7e3405b]
	I0722 04:22:10.993320    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0722 04:22:11.004891    4749 logs.go:276] 2 containers: [b9a200dc8c73 107f02380e96]
	I0722 04:22:11.004963    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0722 04:22:11.019929    4749 logs.go:276] 0 containers: []
	W0722 04:22:11.019941    4749 logs.go:278] No container was found matching "kindnet"
	I0722 04:22:11.020003    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0722 04:22:11.035203    4749 logs.go:276] 2 containers: [ac2f27131054 3222ecbcbbb5]
	I0722 04:22:11.035221    4749 logs.go:123] Gathering logs for kube-controller-manager [b9a200dc8c73] ...
	I0722 04:22:11.035227    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9a200dc8c73"
	I0722 04:22:11.053080    4749 logs.go:123] Gathering logs for Docker ...
	I0722 04:22:11.053091    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0722 04:22:11.077066    4749 logs.go:123] Gathering logs for kubelet ...
	I0722 04:22:11.077074    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 04:22:11.115576    4749 logs.go:123] Gathering logs for kube-apiserver [b242274d2995] ...
	I0722 04:22:11.115587    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b242274d2995"
	I0722 04:22:11.141002    4749 logs.go:123] Gathering logs for kube-scheduler [829d882a5dcf] ...
	I0722 04:22:11.141012    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 829d882a5dcf"
	I0722 04:22:11.154102    4749 logs.go:123] Gathering logs for dmesg ...
	I0722 04:22:11.154116    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 04:22:11.158378    4749 logs.go:123] Gathering logs for etcd [c1a3c1bc5e08] ...
	I0722 04:22:11.158385    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1a3c1bc5e08"
	I0722 04:22:11.172979    4749 logs.go:123] Gathering logs for kube-proxy [1be7d7e3405b] ...
	I0722 04:22:11.172990    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1be7d7e3405b"
	I0722 04:22:11.184748    4749 logs.go:123] Gathering logs for kube-controller-manager [107f02380e96] ...
	I0722 04:22:11.184758    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 107f02380e96"
	I0722 04:22:11.198579    4749 logs.go:123] Gathering logs for storage-provisioner [3222ecbcbbb5] ...
	I0722 04:22:11.198590    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3222ecbcbbb5"
	I0722 04:22:11.209981    4749 logs.go:123] Gathering logs for describe nodes ...
	I0722 04:22:11.209997    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0722 04:22:11.256548    4749 logs.go:123] Gathering logs for kube-apiserver [6f7819ffc2dd] ...
	I0722 04:22:11.256559    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f7819ffc2dd"
	I0722 04:22:11.270332    4749 logs.go:123] Gathering logs for kube-scheduler [9673cbf4cea7] ...
	I0722 04:22:11.270341    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9673cbf4cea7"
	I0722 04:22:11.293213    4749 logs.go:123] Gathering logs for storage-provisioner [ac2f27131054] ...
	I0722 04:22:11.293229    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac2f27131054"
	I0722 04:22:11.304677    4749 logs.go:123] Gathering logs for container status ...
	I0722 04:22:11.304691    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 04:22:11.316937    4749 logs.go:123] Gathering logs for etcd [cdb2f02c95ca] ...
	I0722 04:22:11.316951    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdb2f02c95ca"
	I0722 04:22:11.331682    4749 logs.go:123] Gathering logs for coredns [a11f092c49f3] ...
	I0722 04:22:11.331698    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a11f092c49f3"
	I0722 04:22:13.844791    4749 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:22:18.847086    4749 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:22:18.847242    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0722 04:22:18.859723    4749 logs.go:276] 2 containers: [6f7819ffc2dd b242274d2995]
	I0722 04:22:18.859810    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0722 04:22:18.871433    4749 logs.go:276] 2 containers: [c1a3c1bc5e08 cdb2f02c95ca]
	I0722 04:22:18.871512    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0722 04:22:18.881592    4749 logs.go:276] 1 containers: [a11f092c49f3]
	I0722 04:22:18.881666    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0722 04:22:18.892057    4749 logs.go:276] 2 containers: [829d882a5dcf 9673cbf4cea7]
	I0722 04:22:18.892130    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0722 04:22:18.904756    4749 logs.go:276] 1 containers: [1be7d7e3405b]
	I0722 04:22:18.904826    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0722 04:22:18.915072    4749 logs.go:276] 2 containers: [b9a200dc8c73 107f02380e96]
	I0722 04:22:18.915139    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0722 04:22:18.925450    4749 logs.go:276] 0 containers: []
	W0722 04:22:18.925461    4749 logs.go:278] No container was found matching "kindnet"
	I0722 04:22:18.925519    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0722 04:22:18.935473    4749 logs.go:276] 2 containers: [ac2f27131054 3222ecbcbbb5]
	I0722 04:22:18.935491    4749 logs.go:123] Gathering logs for dmesg ...
	I0722 04:22:18.935497    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 04:22:18.939707    4749 logs.go:123] Gathering logs for etcd [cdb2f02c95ca] ...
	I0722 04:22:18.939716    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdb2f02c95ca"
	I0722 04:22:18.954161    4749 logs.go:123] Gathering logs for kube-scheduler [829d882a5dcf] ...
	I0722 04:22:18.954172    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 829d882a5dcf"
	I0722 04:22:18.967221    4749 logs.go:123] Gathering logs for kube-proxy [1be7d7e3405b] ...
	I0722 04:22:18.967232    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1be7d7e3405b"
	I0722 04:22:18.979597    4749 logs.go:123] Gathering logs for kube-controller-manager [107f02380e96] ...
	I0722 04:22:18.979608    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 107f02380e96"
	I0722 04:22:18.993805    4749 logs.go:123] Gathering logs for storage-provisioner [ac2f27131054] ...
	I0722 04:22:18.993815    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac2f27131054"
	I0722 04:22:19.005087    4749 logs.go:123] Gathering logs for kube-apiserver [b242274d2995] ...
	I0722 04:22:19.005097    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b242274d2995"
	I0722 04:22:19.029995    4749 logs.go:123] Gathering logs for etcd [c1a3c1bc5e08] ...
	I0722 04:22:19.030006    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1a3c1bc5e08"
	I0722 04:22:19.044088    4749 logs.go:123] Gathering logs for kube-scheduler [9673cbf4cea7] ...
	I0722 04:22:19.044097    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9673cbf4cea7"
	I0722 04:22:19.065508    4749 logs.go:123] Gathering logs for container status ...
	I0722 04:22:19.065519    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 04:22:19.078029    4749 logs.go:123] Gathering logs for coredns [a11f092c49f3] ...
	I0722 04:22:19.078040    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a11f092c49f3"
	I0722 04:22:19.092611    4749 logs.go:123] Gathering logs for kube-controller-manager [b9a200dc8c73] ...
	I0722 04:22:19.092624    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9a200dc8c73"
	I0722 04:22:19.109311    4749 logs.go:123] Gathering logs for storage-provisioner [3222ecbcbbb5] ...
	I0722 04:22:19.109324    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3222ecbcbbb5"
	I0722 04:22:19.120151    4749 logs.go:123] Gathering logs for Docker ...
	I0722 04:22:19.120162    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0722 04:22:19.145021    4749 logs.go:123] Gathering logs for kubelet ...
	I0722 04:22:19.145029    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 04:22:19.184590    4749 logs.go:123] Gathering logs for describe nodes ...
	I0722 04:22:19.184601    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0722 04:22:19.219802    4749 logs.go:123] Gathering logs for kube-apiserver [6f7819ffc2dd] ...
	I0722 04:22:19.219816    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f7819ffc2dd"
	I0722 04:22:21.736488    4749 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:22:26.738696    4749 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:22:26.738907    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0722 04:22:26.753605    4749 logs.go:276] 2 containers: [6f7819ffc2dd b242274d2995]
	I0722 04:22:26.753675    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0722 04:22:26.768180    4749 logs.go:276] 2 containers: [c1a3c1bc5e08 cdb2f02c95ca]
	I0722 04:22:26.768254    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0722 04:22:26.779317    4749 logs.go:276] 1 containers: [a11f092c49f3]
	I0722 04:22:26.779383    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0722 04:22:26.793586    4749 logs.go:276] 2 containers: [829d882a5dcf 9673cbf4cea7]
	I0722 04:22:26.793651    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0722 04:22:26.810845    4749 logs.go:276] 1 containers: [1be7d7e3405b]
	I0722 04:22:26.810963    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0722 04:22:26.823398    4749 logs.go:276] 2 containers: [b9a200dc8c73 107f02380e96]
	I0722 04:22:26.823460    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0722 04:22:26.833600    4749 logs.go:276] 0 containers: []
	W0722 04:22:26.833613    4749 logs.go:278] No container was found matching "kindnet"
	I0722 04:22:26.833665    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0722 04:22:26.844223    4749 logs.go:276] 2 containers: [ac2f27131054 3222ecbcbbb5]
	I0722 04:22:26.844239    4749 logs.go:123] Gathering logs for Docker ...
	I0722 04:22:26.844245    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0722 04:22:26.870155    4749 logs.go:123] Gathering logs for container status ...
	I0722 04:22:26.870163    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 04:22:26.881730    4749 logs.go:123] Gathering logs for describe nodes ...
	I0722 04:22:26.881746    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0722 04:22:26.917317    4749 logs.go:123] Gathering logs for kube-apiserver [6f7819ffc2dd] ...
	I0722 04:22:26.917332    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f7819ffc2dd"
	I0722 04:22:26.931863    4749 logs.go:123] Gathering logs for coredns [a11f092c49f3] ...
	I0722 04:22:26.931872    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a11f092c49f3"
	I0722 04:22:26.944011    4749 logs.go:123] Gathering logs for kube-scheduler [9673cbf4cea7] ...
	I0722 04:22:26.944022    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9673cbf4cea7"
	I0722 04:22:26.965651    4749 logs.go:123] Gathering logs for kube-controller-manager [b9a200dc8c73] ...
	I0722 04:22:26.965663    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9a200dc8c73"
	I0722 04:22:26.983587    4749 logs.go:123] Gathering logs for storage-provisioner [3222ecbcbbb5] ...
	I0722 04:22:26.983596    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3222ecbcbbb5"
	I0722 04:22:26.995400    4749 logs.go:123] Gathering logs for dmesg ...
	I0722 04:22:26.995409    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 04:22:26.999744    4749 logs.go:123] Gathering logs for etcd [c1a3c1bc5e08] ...
	I0722 04:22:26.999751    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1a3c1bc5e08"
	I0722 04:22:27.013184    4749 logs.go:123] Gathering logs for storage-provisioner [ac2f27131054] ...
	I0722 04:22:27.013193    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac2f27131054"
	I0722 04:22:27.024760    4749 logs.go:123] Gathering logs for kubelet ...
	I0722 04:22:27.024771    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 04:22:27.063082    4749 logs.go:123] Gathering logs for kube-apiserver [b242274d2995] ...
	I0722 04:22:27.063090    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b242274d2995"
	I0722 04:22:27.091397    4749 logs.go:123] Gathering logs for etcd [cdb2f02c95ca] ...
	I0722 04:22:27.091405    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdb2f02c95ca"
	I0722 04:22:27.105133    4749 logs.go:123] Gathering logs for kube-scheduler [829d882a5dcf] ...
	I0722 04:22:27.105147    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 829d882a5dcf"
	I0722 04:22:27.118000    4749 logs.go:123] Gathering logs for kube-proxy [1be7d7e3405b] ...
	I0722 04:22:27.118010    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1be7d7e3405b"
	I0722 04:22:27.130215    4749 logs.go:123] Gathering logs for kube-controller-manager [107f02380e96] ...
	I0722 04:22:27.130229    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 107f02380e96"
	I0722 04:22:29.646505    4749 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:22:34.648715    4749 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:22:34.648826    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0722 04:22:34.659877    4749 logs.go:276] 2 containers: [6f7819ffc2dd b242274d2995]
	I0722 04:22:34.659953    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0722 04:22:34.670336    4749 logs.go:276] 2 containers: [c1a3c1bc5e08 cdb2f02c95ca]
	I0722 04:22:34.670403    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0722 04:22:34.680886    4749 logs.go:276] 1 containers: [a11f092c49f3]
	I0722 04:22:34.680957    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0722 04:22:34.692216    4749 logs.go:276] 2 containers: [829d882a5dcf 9673cbf4cea7]
	I0722 04:22:34.692284    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0722 04:22:34.702737    4749 logs.go:276] 1 containers: [1be7d7e3405b]
	I0722 04:22:34.702813    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0722 04:22:34.713077    4749 logs.go:276] 2 containers: [b9a200dc8c73 107f02380e96]
	I0722 04:22:34.713140    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0722 04:22:34.729328    4749 logs.go:276] 0 containers: []
	W0722 04:22:34.729343    4749 logs.go:278] No container was found matching "kindnet"
	I0722 04:22:34.729401    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0722 04:22:34.740508    4749 logs.go:276] 2 containers: [ac2f27131054 3222ecbcbbb5]
	I0722 04:22:34.740527    4749 logs.go:123] Gathering logs for coredns [a11f092c49f3] ...
	I0722 04:22:34.740533    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a11f092c49f3"
	I0722 04:22:34.751859    4749 logs.go:123] Gathering logs for kube-scheduler [829d882a5dcf] ...
	I0722 04:22:34.751872    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 829d882a5dcf"
	I0722 04:22:34.764859    4749 logs.go:123] Gathering logs for Docker ...
	I0722 04:22:34.764869    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0722 04:22:34.790770    4749 logs.go:123] Gathering logs for container status ...
	I0722 04:22:34.790778    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 04:22:34.802908    4749 logs.go:123] Gathering logs for kube-apiserver [6f7819ffc2dd] ...
	I0722 04:22:34.802917    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f7819ffc2dd"
	I0722 04:22:34.816997    4749 logs.go:123] Gathering logs for dmesg ...
	I0722 04:22:34.817008    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 04:22:34.821147    4749 logs.go:123] Gathering logs for describe nodes ...
	I0722 04:22:34.821153    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0722 04:22:34.854562    4749 logs.go:123] Gathering logs for kube-proxy [1be7d7e3405b] ...
	I0722 04:22:34.854573    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1be7d7e3405b"
	I0722 04:22:34.866201    4749 logs.go:123] Gathering logs for kubelet ...
	I0722 04:22:34.866212    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 04:22:34.905008    4749 logs.go:123] Gathering logs for etcd [cdb2f02c95ca] ...
	I0722 04:22:34.905021    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdb2f02c95ca"
	I0722 04:22:34.919751    4749 logs.go:123] Gathering logs for kube-scheduler [9673cbf4cea7] ...
	I0722 04:22:34.919762    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9673cbf4cea7"
	I0722 04:22:34.941623    4749 logs.go:123] Gathering logs for storage-provisioner [3222ecbcbbb5] ...
	I0722 04:22:34.941635    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3222ecbcbbb5"
	I0722 04:22:34.952646    4749 logs.go:123] Gathering logs for kube-apiserver [b242274d2995] ...
	I0722 04:22:34.952657    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b242274d2995"
	I0722 04:22:34.977814    4749 logs.go:123] Gathering logs for kube-controller-manager [b9a200dc8c73] ...
	I0722 04:22:34.977829    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9a200dc8c73"
	I0722 04:22:35.000342    4749 logs.go:123] Gathering logs for kube-controller-manager [107f02380e96] ...
	I0722 04:22:35.000354    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 107f02380e96"
	I0722 04:22:35.014594    4749 logs.go:123] Gathering logs for storage-provisioner [ac2f27131054] ...
	I0722 04:22:35.014603    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac2f27131054"
	I0722 04:22:35.025975    4749 logs.go:123] Gathering logs for etcd [c1a3c1bc5e08] ...
	I0722 04:22:35.025988    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1a3c1bc5e08"
	I0722 04:22:37.541822    4749 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:22:42.544118    4749 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:22:42.544467    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0722 04:22:42.574531    4749 logs.go:276] 2 containers: [6f7819ffc2dd b242274d2995]
	I0722 04:22:42.574617    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0722 04:22:42.585111    4749 logs.go:276] 2 containers: [c1a3c1bc5e08 cdb2f02c95ca]
	I0722 04:22:42.585171    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0722 04:22:42.596463    4749 logs.go:276] 1 containers: [a11f092c49f3]
	I0722 04:22:42.596534    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0722 04:22:42.609427    4749 logs.go:276] 2 containers: [829d882a5dcf 9673cbf4cea7]
	I0722 04:22:42.609505    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0722 04:22:42.621155    4749 logs.go:276] 1 containers: [1be7d7e3405b]
	I0722 04:22:42.621227    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0722 04:22:42.641440    4749 logs.go:276] 2 containers: [b9a200dc8c73 107f02380e96]
	I0722 04:22:42.641515    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0722 04:22:42.652161    4749 logs.go:276] 0 containers: []
	W0722 04:22:42.652174    4749 logs.go:278] No container was found matching "kindnet"
	I0722 04:22:42.652233    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0722 04:22:42.663886    4749 logs.go:276] 2 containers: [ac2f27131054 3222ecbcbbb5]
	I0722 04:22:42.663908    4749 logs.go:123] Gathering logs for etcd [cdb2f02c95ca] ...
	I0722 04:22:42.663914    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdb2f02c95ca"
	I0722 04:22:42.682234    4749 logs.go:123] Gathering logs for kube-controller-manager [107f02380e96] ...
	I0722 04:22:42.682243    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 107f02380e96"
	I0722 04:22:42.701188    4749 logs.go:123] Gathering logs for dmesg ...
	I0722 04:22:42.701201    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 04:22:42.705751    4749 logs.go:123] Gathering logs for describe nodes ...
	I0722 04:22:42.705765    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0722 04:22:42.744667    4749 logs.go:123] Gathering logs for kube-controller-manager [b9a200dc8c73] ...
	I0722 04:22:42.744676    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9a200dc8c73"
	I0722 04:22:42.763226    4749 logs.go:123] Gathering logs for storage-provisioner [ac2f27131054] ...
	I0722 04:22:42.763233    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac2f27131054"
	I0722 04:22:42.775770    4749 logs.go:123] Gathering logs for storage-provisioner [3222ecbcbbb5] ...
	I0722 04:22:42.775781    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3222ecbcbbb5"
	I0722 04:22:42.789366    4749 logs.go:123] Gathering logs for container status ...
	I0722 04:22:42.789379    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 04:22:42.803187    4749 logs.go:123] Gathering logs for kubelet ...
	I0722 04:22:42.803200    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 04:22:42.844992    4749 logs.go:123] Gathering logs for etcd [c1a3c1bc5e08] ...
	I0722 04:22:42.845015    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1a3c1bc5e08"
	I0722 04:22:42.861031    4749 logs.go:123] Gathering logs for kube-scheduler [9673cbf4cea7] ...
	I0722 04:22:42.861044    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9673cbf4cea7"
	I0722 04:22:42.884806    4749 logs.go:123] Gathering logs for kube-proxy [1be7d7e3405b] ...
	I0722 04:22:42.884818    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1be7d7e3405b"
	I0722 04:22:42.901499    4749 logs.go:123] Gathering logs for Docker ...
	I0722 04:22:42.901509    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0722 04:22:42.926097    4749 logs.go:123] Gathering logs for kube-apiserver [6f7819ffc2dd] ...
	I0722 04:22:42.926110    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f7819ffc2dd"
	I0722 04:22:42.940565    4749 logs.go:123] Gathering logs for kube-apiserver [b242274d2995] ...
	I0722 04:22:42.940576    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b242274d2995"
	I0722 04:22:42.966709    4749 logs.go:123] Gathering logs for coredns [a11f092c49f3] ...
	I0722 04:22:42.966721    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a11f092c49f3"
	I0722 04:22:42.977765    4749 logs.go:123] Gathering logs for kube-scheduler [829d882a5dcf] ...
	I0722 04:22:42.977777    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 829d882a5dcf"
	I0722 04:22:45.491198    4749 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:22:50.493721    4749 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:22:50.493931    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0722 04:22:50.511841    4749 logs.go:276] 2 containers: [6f7819ffc2dd b242274d2995]
	I0722 04:22:50.511930    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0722 04:22:50.525206    4749 logs.go:276] 2 containers: [c1a3c1bc5e08 cdb2f02c95ca]
	I0722 04:22:50.525276    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0722 04:22:50.537557    4749 logs.go:276] 1 containers: [a11f092c49f3]
	I0722 04:22:50.537626    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0722 04:22:50.548019    4749 logs.go:276] 2 containers: [829d882a5dcf 9673cbf4cea7]
	I0722 04:22:50.548091    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0722 04:22:50.558870    4749 logs.go:276] 1 containers: [1be7d7e3405b]
	I0722 04:22:50.558938    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0722 04:22:50.569315    4749 logs.go:276] 2 containers: [b9a200dc8c73 107f02380e96]
	I0722 04:22:50.569385    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0722 04:22:50.584148    4749 logs.go:276] 0 containers: []
	W0722 04:22:50.584164    4749 logs.go:278] No container was found matching "kindnet"
	I0722 04:22:50.584223    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0722 04:22:50.594845    4749 logs.go:276] 2 containers: [ac2f27131054 3222ecbcbbb5]
	I0722 04:22:50.594864    4749 logs.go:123] Gathering logs for kube-scheduler [829d882a5dcf] ...
	I0722 04:22:50.594870    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 829d882a5dcf"
	I0722 04:22:50.607633    4749 logs.go:123] Gathering logs for storage-provisioner [ac2f27131054] ...
	I0722 04:22:50.607646    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac2f27131054"
	I0722 04:22:50.622922    4749 logs.go:123] Gathering logs for kubelet ...
	I0722 04:22:50.622934    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 04:22:50.661285    4749 logs.go:123] Gathering logs for etcd [c1a3c1bc5e08] ...
	I0722 04:22:50.661296    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1a3c1bc5e08"
	I0722 04:22:50.680552    4749 logs.go:123] Gathering logs for etcd [cdb2f02c95ca] ...
	I0722 04:22:50.680566    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdb2f02c95ca"
	I0722 04:22:50.695171    4749 logs.go:123] Gathering logs for coredns [a11f092c49f3] ...
	I0722 04:22:50.695186    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a11f092c49f3"
	I0722 04:22:50.706586    4749 logs.go:123] Gathering logs for kube-controller-manager [107f02380e96] ...
	I0722 04:22:50.706598    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 107f02380e96"
	I0722 04:22:50.720387    4749 logs.go:123] Gathering logs for Docker ...
	I0722 04:22:50.720398    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0722 04:22:50.745340    4749 logs.go:123] Gathering logs for kube-controller-manager [b9a200dc8c73] ...
	I0722 04:22:50.745352    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9a200dc8c73"
	I0722 04:22:50.766673    4749 logs.go:123] Gathering logs for storage-provisioner [3222ecbcbbb5] ...
	I0722 04:22:50.766684    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3222ecbcbbb5"
	I0722 04:22:50.779375    4749 logs.go:123] Gathering logs for container status ...
	I0722 04:22:50.779389    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 04:22:50.790658    4749 logs.go:123] Gathering logs for kube-apiserver [6f7819ffc2dd] ...
	I0722 04:22:50.790672    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f7819ffc2dd"
	I0722 04:22:50.808898    4749 logs.go:123] Gathering logs for kube-apiserver [b242274d2995] ...
	I0722 04:22:50.808912    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b242274d2995"
	I0722 04:22:50.833539    4749 logs.go:123] Gathering logs for kube-proxy [1be7d7e3405b] ...
	I0722 04:22:50.833552    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1be7d7e3405b"
	I0722 04:22:50.844961    4749 logs.go:123] Gathering logs for dmesg ...
	I0722 04:22:50.844977    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 04:22:50.848985    4749 logs.go:123] Gathering logs for describe nodes ...
	I0722 04:22:50.848991    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0722 04:22:50.883488    4749 logs.go:123] Gathering logs for kube-scheduler [9673cbf4cea7] ...
	I0722 04:22:50.883503    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9673cbf4cea7"
	I0722 04:22:53.406614    4749 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:22:58.409138    4749 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:22:58.409317    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0722 04:22:58.427165    4749 logs.go:276] 2 containers: [6f7819ffc2dd b242274d2995]
	I0722 04:22:58.427248    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0722 04:22:58.440194    4749 logs.go:276] 2 containers: [c1a3c1bc5e08 cdb2f02c95ca]
	I0722 04:22:58.440261    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0722 04:22:58.454766    4749 logs.go:276] 1 containers: [a11f092c49f3]
	I0722 04:22:58.454836    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0722 04:22:58.465533    4749 logs.go:276] 2 containers: [829d882a5dcf 9673cbf4cea7]
	I0722 04:22:58.465603    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0722 04:22:58.476376    4749 logs.go:276] 1 containers: [1be7d7e3405b]
	I0722 04:22:58.476444    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0722 04:22:58.487331    4749 logs.go:276] 2 containers: [b9a200dc8c73 107f02380e96]
	I0722 04:22:58.487402    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0722 04:22:58.497457    4749 logs.go:276] 0 containers: []
	W0722 04:22:58.497468    4749 logs.go:278] No container was found matching "kindnet"
	I0722 04:22:58.497523    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0722 04:22:58.508271    4749 logs.go:276] 2 containers: [ac2f27131054 3222ecbcbbb5]
	I0722 04:22:58.508288    4749 logs.go:123] Gathering logs for etcd [c1a3c1bc5e08] ...
	I0722 04:22:58.508293    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1a3c1bc5e08"
	I0722 04:22:58.521866    4749 logs.go:123] Gathering logs for kube-controller-manager [107f02380e96] ...
	I0722 04:22:58.521876    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 107f02380e96"
	I0722 04:22:58.535686    4749 logs.go:123] Gathering logs for container status ...
	I0722 04:22:58.535696    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 04:22:58.547578    4749 logs.go:123] Gathering logs for kubelet ...
	I0722 04:22:58.547588    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 04:22:58.585020    4749 logs.go:123] Gathering logs for describe nodes ...
	I0722 04:22:58.585031    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0722 04:22:58.619242    4749 logs.go:123] Gathering logs for coredns [a11f092c49f3] ...
	I0722 04:22:58.619254    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a11f092c49f3"
	I0722 04:22:58.631222    4749 logs.go:123] Gathering logs for kube-scheduler [829d882a5dcf] ...
	I0722 04:22:58.631234    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 829d882a5dcf"
	I0722 04:22:58.643949    4749 logs.go:123] Gathering logs for kube-proxy [1be7d7e3405b] ...
	I0722 04:22:58.643960    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1be7d7e3405b"
	I0722 04:22:58.656134    4749 logs.go:123] Gathering logs for kube-controller-manager [b9a200dc8c73] ...
	I0722 04:22:58.656145    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9a200dc8c73"
	I0722 04:22:58.673556    4749 logs.go:123] Gathering logs for storage-provisioner [ac2f27131054] ...
	I0722 04:22:58.673570    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac2f27131054"
	I0722 04:22:58.685070    4749 logs.go:123] Gathering logs for Docker ...
	I0722 04:22:58.685081    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0722 04:22:58.708077    4749 logs.go:123] Gathering logs for dmesg ...
	I0722 04:22:58.708085    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 04:22:58.712628    4749 logs.go:123] Gathering logs for kube-scheduler [9673cbf4cea7] ...
	I0722 04:22:58.712637    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9673cbf4cea7"
	I0722 04:22:58.737329    4749 logs.go:123] Gathering logs for kube-apiserver [b242274d2995] ...
	I0722 04:22:58.737340    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b242274d2995"
	I0722 04:22:58.764318    4749 logs.go:123] Gathering logs for etcd [cdb2f02c95ca] ...
	I0722 04:22:58.764329    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdb2f02c95ca"
	I0722 04:22:58.778701    4749 logs.go:123] Gathering logs for storage-provisioner [3222ecbcbbb5] ...
	I0722 04:22:58.778716    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3222ecbcbbb5"
	I0722 04:22:58.789873    4749 logs.go:123] Gathering logs for kube-apiserver [6f7819ffc2dd] ...
	I0722 04:22:58.789884    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f7819ffc2dd"
	I0722 04:23:01.305092    4749 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:23:06.307427    4749 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:23:06.307587    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0722 04:23:06.323492    4749 logs.go:276] 2 containers: [6f7819ffc2dd b242274d2995]
	I0722 04:23:06.323574    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0722 04:23:06.336276    4749 logs.go:276] 2 containers: [c1a3c1bc5e08 cdb2f02c95ca]
	I0722 04:23:06.336344    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0722 04:23:06.347816    4749 logs.go:276] 1 containers: [a11f092c49f3]
	I0722 04:23:06.347879    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0722 04:23:06.358466    4749 logs.go:276] 2 containers: [829d882a5dcf 9673cbf4cea7]
	I0722 04:23:06.358541    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0722 04:23:06.368891    4749 logs.go:276] 1 containers: [1be7d7e3405b]
	I0722 04:23:06.368963    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0722 04:23:06.379475    4749 logs.go:276] 2 containers: [b9a200dc8c73 107f02380e96]
	I0722 04:23:06.379542    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0722 04:23:06.390098    4749 logs.go:276] 0 containers: []
	W0722 04:23:06.390110    4749 logs.go:278] No container was found matching "kindnet"
	I0722 04:23:06.390172    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0722 04:23:06.400350    4749 logs.go:276] 2 containers: [ac2f27131054 3222ecbcbbb5]
	I0722 04:23:06.400373    4749 logs.go:123] Gathering logs for etcd [c1a3c1bc5e08] ...
	I0722 04:23:06.400378    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1a3c1bc5e08"
	I0722 04:23:06.413751    4749 logs.go:123] Gathering logs for etcd [cdb2f02c95ca] ...
	I0722 04:23:06.413761    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdb2f02c95ca"
	I0722 04:23:06.428032    4749 logs.go:123] Gathering logs for kube-scheduler [829d882a5dcf] ...
	I0722 04:23:06.428044    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 829d882a5dcf"
	I0722 04:23:06.440871    4749 logs.go:123] Gathering logs for kube-scheduler [9673cbf4cea7] ...
	I0722 04:23:06.440882    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9673cbf4cea7"
	I0722 04:23:06.462156    4749 logs.go:123] Gathering logs for storage-provisioner [3222ecbcbbb5] ...
	I0722 04:23:06.462171    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3222ecbcbbb5"
	I0722 04:23:06.473470    4749 logs.go:123] Gathering logs for coredns [a11f092c49f3] ...
	I0722 04:23:06.473483    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a11f092c49f3"
	I0722 04:23:06.486379    4749 logs.go:123] Gathering logs for kube-controller-manager [b9a200dc8c73] ...
	I0722 04:23:06.486391    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9a200dc8c73"
	I0722 04:23:06.504053    4749 logs.go:123] Gathering logs for kube-controller-manager [107f02380e96] ...
	I0722 04:23:06.504065    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 107f02380e96"
	I0722 04:23:06.517697    4749 logs.go:123] Gathering logs for kubelet ...
	I0722 04:23:06.517708    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 04:23:06.556444    4749 logs.go:123] Gathering logs for dmesg ...
	I0722 04:23:06.556453    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 04:23:06.560725    4749 logs.go:123] Gathering logs for kube-apiserver [6f7819ffc2dd] ...
	I0722 04:23:06.560733    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f7819ffc2dd"
	I0722 04:23:06.575818    4749 logs.go:123] Gathering logs for Docker ...
	I0722 04:23:06.575831    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0722 04:23:06.600665    4749 logs.go:123] Gathering logs for describe nodes ...
	I0722 04:23:06.600674    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0722 04:23:06.635076    4749 logs.go:123] Gathering logs for kube-apiserver [b242274d2995] ...
	I0722 04:23:06.635088    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b242274d2995"
	I0722 04:23:06.661240    4749 logs.go:123] Gathering logs for kube-proxy [1be7d7e3405b] ...
	I0722 04:23:06.661250    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1be7d7e3405b"
	I0722 04:23:06.672372    4749 logs.go:123] Gathering logs for storage-provisioner [ac2f27131054] ...
	I0722 04:23:06.672383    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac2f27131054"
	I0722 04:23:06.683472    4749 logs.go:123] Gathering logs for container status ...
	I0722 04:23:06.683483    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 04:23:09.199557    4749 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:23:14.201792    4749 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:23:14.201923    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0722 04:23:14.217469    4749 logs.go:276] 2 containers: [6f7819ffc2dd b242274d2995]
	I0722 04:23:14.217551    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0722 04:23:14.229377    4749 logs.go:276] 2 containers: [c1a3c1bc5e08 cdb2f02c95ca]
	I0722 04:23:14.229444    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0722 04:23:14.242617    4749 logs.go:276] 1 containers: [a11f092c49f3]
	I0722 04:23:14.242677    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0722 04:23:14.253063    4749 logs.go:276] 2 containers: [829d882a5dcf 9673cbf4cea7]
	I0722 04:23:14.253128    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0722 04:23:14.267906    4749 logs.go:276] 1 containers: [1be7d7e3405b]
	I0722 04:23:14.267966    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0722 04:23:14.278439    4749 logs.go:276] 2 containers: [b9a200dc8c73 107f02380e96]
	I0722 04:23:14.278502    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0722 04:23:14.290643    4749 logs.go:276] 0 containers: []
	W0722 04:23:14.290656    4749 logs.go:278] No container was found matching "kindnet"
	I0722 04:23:14.290712    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0722 04:23:14.301725    4749 logs.go:276] 2 containers: [ac2f27131054 3222ecbcbbb5]
	I0722 04:23:14.301744    4749 logs.go:123] Gathering logs for kube-apiserver [6f7819ffc2dd] ...
	I0722 04:23:14.301750    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f7819ffc2dd"
	I0722 04:23:14.315202    4749 logs.go:123] Gathering logs for kube-apiserver [b242274d2995] ...
	I0722 04:23:14.315212    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b242274d2995"
	I0722 04:23:14.339589    4749 logs.go:123] Gathering logs for kube-proxy [1be7d7e3405b] ...
	I0722 04:23:14.339601    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1be7d7e3405b"
	I0722 04:23:14.352639    4749 logs.go:123] Gathering logs for kube-controller-manager [b9a200dc8c73] ...
	I0722 04:23:14.352652    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9a200dc8c73"
	I0722 04:23:14.369904    4749 logs.go:123] Gathering logs for kube-controller-manager [107f02380e96] ...
	I0722 04:23:14.369914    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 107f02380e96"
	I0722 04:23:14.386198    4749 logs.go:123] Gathering logs for coredns [a11f092c49f3] ...
	I0722 04:23:14.386212    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a11f092c49f3"
	I0722 04:23:14.397408    4749 logs.go:123] Gathering logs for storage-provisioner [ac2f27131054] ...
	I0722 04:23:14.397420    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac2f27131054"
	I0722 04:23:14.409280    4749 logs.go:123] Gathering logs for kubelet ...
	I0722 04:23:14.409291    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 04:23:14.448426    4749 logs.go:123] Gathering logs for dmesg ...
	I0722 04:23:14.448435    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 04:23:14.452636    4749 logs.go:123] Gathering logs for storage-provisioner [3222ecbcbbb5] ...
	I0722 04:23:14.452643    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3222ecbcbbb5"
	I0722 04:23:14.463567    4749 logs.go:123] Gathering logs for Docker ...
	I0722 04:23:14.463580    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0722 04:23:14.488698    4749 logs.go:123] Gathering logs for describe nodes ...
	I0722 04:23:14.488713    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0722 04:23:14.522827    4749 logs.go:123] Gathering logs for etcd [c1a3c1bc5e08] ...
	I0722 04:23:14.522841    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1a3c1bc5e08"
	I0722 04:23:14.536563    4749 logs.go:123] Gathering logs for etcd [cdb2f02c95ca] ...
	I0722 04:23:14.536574    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdb2f02c95ca"
	I0722 04:23:14.550950    4749 logs.go:123] Gathering logs for kube-scheduler [829d882a5dcf] ...
	I0722 04:23:14.550961    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 829d882a5dcf"
	I0722 04:23:14.563785    4749 logs.go:123] Gathering logs for kube-scheduler [9673cbf4cea7] ...
	I0722 04:23:14.563795    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9673cbf4cea7"
	I0722 04:23:14.588361    4749 logs.go:123] Gathering logs for container status ...
	I0722 04:23:14.588372    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 04:23:17.102298    4749 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:23:22.104496    4749 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:23:22.104672    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0722 04:23:22.123883    4749 logs.go:276] 2 containers: [6f7819ffc2dd b242274d2995]
	I0722 04:23:22.123971    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0722 04:23:22.136626    4749 logs.go:276] 2 containers: [c1a3c1bc5e08 cdb2f02c95ca]
	I0722 04:23:22.136696    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0722 04:23:22.147862    4749 logs.go:276] 1 containers: [a11f092c49f3]
	I0722 04:23:22.147933    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0722 04:23:22.158377    4749 logs.go:276] 2 containers: [829d882a5dcf 9673cbf4cea7]
	I0722 04:23:22.158444    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0722 04:23:22.168713    4749 logs.go:276] 1 containers: [1be7d7e3405b]
	I0722 04:23:22.168817    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0722 04:23:22.179565    4749 logs.go:276] 2 containers: [b9a200dc8c73 107f02380e96]
	I0722 04:23:22.179641    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0722 04:23:22.194576    4749 logs.go:276] 0 containers: []
	W0722 04:23:22.194588    4749 logs.go:278] No container was found matching "kindnet"
	I0722 04:23:22.194647    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0722 04:23:22.205442    4749 logs.go:276] 2 containers: [ac2f27131054 3222ecbcbbb5]
	I0722 04:23:22.205458    4749 logs.go:123] Gathering logs for kube-apiserver [b242274d2995] ...
	I0722 04:23:22.205463    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b242274d2995"
	I0722 04:23:22.229156    4749 logs.go:123] Gathering logs for kube-scheduler [829d882a5dcf] ...
	I0722 04:23:22.229167    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 829d882a5dcf"
	I0722 04:23:22.242801    4749 logs.go:123] Gathering logs for kube-scheduler [9673cbf4cea7] ...
	I0722 04:23:22.242811    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9673cbf4cea7"
	I0722 04:23:22.263929    4749 logs.go:123] Gathering logs for kube-controller-manager [107f02380e96] ...
	I0722 04:23:22.263939    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 107f02380e96"
	I0722 04:23:22.279327    4749 logs.go:123] Gathering logs for storage-provisioner [3222ecbcbbb5] ...
	I0722 04:23:22.279340    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3222ecbcbbb5"
	I0722 04:23:22.290694    4749 logs.go:123] Gathering logs for dmesg ...
	I0722 04:23:22.290705    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 04:23:22.295160    4749 logs.go:123] Gathering logs for kube-apiserver [6f7819ffc2dd] ...
	I0722 04:23:22.295168    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f7819ffc2dd"
	I0722 04:23:22.309131    4749 logs.go:123] Gathering logs for etcd [cdb2f02c95ca] ...
	I0722 04:23:22.309144    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdb2f02c95ca"
	I0722 04:23:22.323568    4749 logs.go:123] Gathering logs for kube-proxy [1be7d7e3405b] ...
	I0722 04:23:22.323579    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1be7d7e3405b"
	I0722 04:23:22.335487    4749 logs.go:123] Gathering logs for storage-provisioner [ac2f27131054] ...
	I0722 04:23:22.335497    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac2f27131054"
	I0722 04:23:22.346898    4749 logs.go:123] Gathering logs for Docker ...
	I0722 04:23:22.346912    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0722 04:23:22.371861    4749 logs.go:123] Gathering logs for describe nodes ...
	I0722 04:23:22.371868    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0722 04:23:22.408248    4749 logs.go:123] Gathering logs for etcd [c1a3c1bc5e08] ...
	I0722 04:23:22.408263    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1a3c1bc5e08"
	I0722 04:23:22.423205    4749 logs.go:123] Gathering logs for coredns [a11f092c49f3] ...
	I0722 04:23:22.423224    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a11f092c49f3"
	I0722 04:23:22.435666    4749 logs.go:123] Gathering logs for kubelet ...
	I0722 04:23:22.435679    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 04:23:22.476150    4749 logs.go:123] Gathering logs for kube-controller-manager [b9a200dc8c73] ...
	I0722 04:23:22.476162    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9a200dc8c73"
	I0722 04:23:22.494957    4749 logs.go:123] Gathering logs for container status ...
	I0722 04:23:22.494968    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 04:23:25.009387    4749 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:23:30.011584    4749 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:23:30.011704    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0722 04:23:30.031397    4749 logs.go:276] 2 containers: [6f7819ffc2dd b242274d2995]
	I0722 04:23:30.031481    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0722 04:23:30.045556    4749 logs.go:276] 2 containers: [c1a3c1bc5e08 cdb2f02c95ca]
	I0722 04:23:30.045627    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0722 04:23:30.058509    4749 logs.go:276] 1 containers: [a11f092c49f3]
	I0722 04:23:30.058576    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0722 04:23:30.068803    4749 logs.go:276] 2 containers: [829d882a5dcf 9673cbf4cea7]
	I0722 04:23:30.068875    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0722 04:23:30.079594    4749 logs.go:276] 1 containers: [1be7d7e3405b]
	I0722 04:23:30.079657    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0722 04:23:30.090980    4749 logs.go:276] 2 containers: [b9a200dc8c73 107f02380e96]
	I0722 04:23:30.091051    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0722 04:23:30.101165    4749 logs.go:276] 0 containers: []
	W0722 04:23:30.101174    4749 logs.go:278] No container was found matching "kindnet"
	I0722 04:23:30.101226    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0722 04:23:30.111745    4749 logs.go:276] 2 containers: [ac2f27131054 3222ecbcbbb5]
	I0722 04:23:30.111766    4749 logs.go:123] Gathering logs for kube-controller-manager [b9a200dc8c73] ...
	I0722 04:23:30.111773    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9a200dc8c73"
	I0722 04:23:30.129219    4749 logs.go:123] Gathering logs for container status ...
	I0722 04:23:30.129232    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 04:23:30.141673    4749 logs.go:123] Gathering logs for kubelet ...
	I0722 04:23:30.141684    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 04:23:30.181125    4749 logs.go:123] Gathering logs for describe nodes ...
	I0722 04:23:30.181136    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0722 04:23:30.217914    4749 logs.go:123] Gathering logs for storage-provisioner [3222ecbcbbb5] ...
	I0722 04:23:30.217926    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3222ecbcbbb5"
	I0722 04:23:30.229055    4749 logs.go:123] Gathering logs for dmesg ...
	I0722 04:23:30.229067    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 04:23:30.233189    4749 logs.go:123] Gathering logs for etcd [c1a3c1bc5e08] ...
	I0722 04:23:30.233196    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1a3c1bc5e08"
	I0722 04:23:30.247511    4749 logs.go:123] Gathering logs for etcd [cdb2f02c95ca] ...
	I0722 04:23:30.247523    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdb2f02c95ca"
	I0722 04:23:30.262241    4749 logs.go:123] Gathering logs for coredns [a11f092c49f3] ...
	I0722 04:23:30.262253    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a11f092c49f3"
	I0722 04:23:30.273727    4749 logs.go:123] Gathering logs for kube-scheduler [829d882a5dcf] ...
	I0722 04:23:30.273738    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 829d882a5dcf"
	I0722 04:23:30.286854    4749 logs.go:123] Gathering logs for kube-proxy [1be7d7e3405b] ...
	I0722 04:23:30.286865    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1be7d7e3405b"
	I0722 04:23:30.298260    4749 logs.go:123] Gathering logs for kube-controller-manager [107f02380e96] ...
	I0722 04:23:30.298271    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 107f02380e96"
	I0722 04:23:30.318022    4749 logs.go:123] Gathering logs for storage-provisioner [ac2f27131054] ...
	I0722 04:23:30.318035    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac2f27131054"
	I0722 04:23:30.329010    4749 logs.go:123] Gathering logs for kube-apiserver [6f7819ffc2dd] ...
	I0722 04:23:30.329019    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f7819ffc2dd"
	I0722 04:23:30.342546    4749 logs.go:123] Gathering logs for kube-apiserver [b242274d2995] ...
	I0722 04:23:30.342557    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b242274d2995"
	I0722 04:23:30.367173    4749 logs.go:123] Gathering logs for kube-scheduler [9673cbf4cea7] ...
	I0722 04:23:30.367185    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9673cbf4cea7"
	I0722 04:23:30.391160    4749 logs.go:123] Gathering logs for Docker ...
	I0722 04:23:30.391170    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0722 04:23:32.916133    4749 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:23:37.918360    4749 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:23:37.918539    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0722 04:23:37.930783    4749 logs.go:276] 2 containers: [6f7819ffc2dd b242274d2995]
	I0722 04:23:37.930876    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0722 04:23:37.943934    4749 logs.go:276] 2 containers: [c1a3c1bc5e08 cdb2f02c95ca]
	I0722 04:23:37.944014    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0722 04:23:37.954429    4749 logs.go:276] 1 containers: [a11f092c49f3]
	I0722 04:23:37.954496    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0722 04:23:37.965427    4749 logs.go:276] 2 containers: [829d882a5dcf 9673cbf4cea7]
	I0722 04:23:37.965493    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0722 04:23:37.976093    4749 logs.go:276] 1 containers: [1be7d7e3405b]
	I0722 04:23:37.976161    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0722 04:23:37.986827    4749 logs.go:276] 2 containers: [b9a200dc8c73 107f02380e96]
	I0722 04:23:37.986893    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0722 04:23:38.010467    4749 logs.go:276] 0 containers: []
	W0722 04:23:38.010483    4749 logs.go:278] No container was found matching "kindnet"
	I0722 04:23:38.010538    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0722 04:23:38.022198    4749 logs.go:276] 2 containers: [ac2f27131054 3222ecbcbbb5]
	I0722 04:23:38.022215    4749 logs.go:123] Gathering logs for Docker ...
	I0722 04:23:38.022220    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0722 04:23:38.047179    4749 logs.go:123] Gathering logs for dmesg ...
	I0722 04:23:38.047187    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 04:23:38.051616    4749 logs.go:123] Gathering logs for etcd [cdb2f02c95ca] ...
	I0722 04:23:38.051624    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdb2f02c95ca"
	I0722 04:23:38.066131    4749 logs.go:123] Gathering logs for kube-proxy [1be7d7e3405b] ...
	I0722 04:23:38.066141    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1be7d7e3405b"
	I0722 04:23:38.077466    4749 logs.go:123] Gathering logs for kube-scheduler [9673cbf4cea7] ...
	I0722 04:23:38.077476    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9673cbf4cea7"
	I0722 04:23:38.098886    4749 logs.go:123] Gathering logs for kube-controller-manager [107f02380e96] ...
	I0722 04:23:38.098897    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 107f02380e96"
	I0722 04:23:38.112904    4749 logs.go:123] Gathering logs for storage-provisioner [ac2f27131054] ...
	I0722 04:23:38.112914    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac2f27131054"
	I0722 04:23:38.124652    4749 logs.go:123] Gathering logs for container status ...
	I0722 04:23:38.124662    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 04:23:38.136494    4749 logs.go:123] Gathering logs for kube-apiserver [b242274d2995] ...
	I0722 04:23:38.136507    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b242274d2995"
	I0722 04:23:38.162075    4749 logs.go:123] Gathering logs for etcd [c1a3c1bc5e08] ...
	I0722 04:23:38.162086    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1a3c1bc5e08"
	I0722 04:23:38.178456    4749 logs.go:123] Gathering logs for kube-scheduler [829d882a5dcf] ...
	I0722 04:23:38.178466    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 829d882a5dcf"
	I0722 04:23:38.191288    4749 logs.go:123] Gathering logs for kubelet ...
	I0722 04:23:38.191299    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 04:23:38.231457    4749 logs.go:123] Gathering logs for kube-apiserver [6f7819ffc2dd] ...
	I0722 04:23:38.231466    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f7819ffc2dd"
	I0722 04:23:38.252476    4749 logs.go:123] Gathering logs for kube-controller-manager [b9a200dc8c73] ...
	I0722 04:23:38.252486    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9a200dc8c73"
	I0722 04:23:38.269483    4749 logs.go:123] Gathering logs for describe nodes ...
	I0722 04:23:38.269494    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0722 04:23:38.305207    4749 logs.go:123] Gathering logs for coredns [a11f092c49f3] ...
	I0722 04:23:38.305218    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a11f092c49f3"
	I0722 04:23:38.316768    4749 logs.go:123] Gathering logs for storage-provisioner [3222ecbcbbb5] ...
	I0722 04:23:38.316779    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3222ecbcbbb5"
	I0722 04:23:40.835132    4749 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:23:45.837424    4749 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:23:45.837596    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0722 04:23:45.854206    4749 logs.go:276] 2 containers: [6f7819ffc2dd b242274d2995]
	I0722 04:23:45.854298    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0722 04:23:45.866978    4749 logs.go:276] 2 containers: [c1a3c1bc5e08 cdb2f02c95ca]
	I0722 04:23:45.867052    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0722 04:23:45.878019    4749 logs.go:276] 1 containers: [a11f092c49f3]
	I0722 04:23:45.878092    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0722 04:23:45.888506    4749 logs.go:276] 2 containers: [829d882a5dcf 9673cbf4cea7]
	I0722 04:23:45.888570    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0722 04:23:45.898750    4749 logs.go:276] 1 containers: [1be7d7e3405b]
	I0722 04:23:45.898820    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0722 04:23:45.909865    4749 logs.go:276] 2 containers: [b9a200dc8c73 107f02380e96]
	I0722 04:23:45.909932    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0722 04:23:45.923841    4749 logs.go:276] 0 containers: []
	W0722 04:23:45.923852    4749 logs.go:278] No container was found matching "kindnet"
	I0722 04:23:45.923903    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0722 04:23:45.934578    4749 logs.go:276] 2 containers: [ac2f27131054 3222ecbcbbb5]
	I0722 04:23:45.934596    4749 logs.go:123] Gathering logs for kubelet ...
	I0722 04:23:45.934601    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 04:23:45.973048    4749 logs.go:123] Gathering logs for dmesg ...
	I0722 04:23:45.973058    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 04:23:45.977066    4749 logs.go:123] Gathering logs for etcd [cdb2f02c95ca] ...
	I0722 04:23:45.977075    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdb2f02c95ca"
	I0722 04:23:45.991492    4749 logs.go:123] Gathering logs for kube-controller-manager [b9a200dc8c73] ...
	I0722 04:23:45.991505    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9a200dc8c73"
	I0722 04:23:46.009337    4749 logs.go:123] Gathering logs for storage-provisioner [ac2f27131054] ...
	I0722 04:23:46.009347    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac2f27131054"
	I0722 04:23:46.020938    4749 logs.go:123] Gathering logs for storage-provisioner [3222ecbcbbb5] ...
	I0722 04:23:46.020948    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3222ecbcbbb5"
	I0722 04:23:46.032301    4749 logs.go:123] Gathering logs for describe nodes ...
	I0722 04:23:46.032312    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0722 04:23:46.068598    4749 logs.go:123] Gathering logs for kube-apiserver [b242274d2995] ...
	I0722 04:23:46.068609    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b242274d2995"
	I0722 04:23:46.094057    4749 logs.go:123] Gathering logs for etcd [c1a3c1bc5e08] ...
	I0722 04:23:46.094073    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1a3c1bc5e08"
	I0722 04:23:46.112273    4749 logs.go:123] Gathering logs for kube-scheduler [9673cbf4cea7] ...
	I0722 04:23:46.112285    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9673cbf4cea7"
	I0722 04:23:46.133613    4749 logs.go:123] Gathering logs for kube-proxy [1be7d7e3405b] ...
	I0722 04:23:46.133625    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1be7d7e3405b"
	I0722 04:23:46.146051    4749 logs.go:123] Gathering logs for kube-controller-manager [107f02380e96] ...
	I0722 04:23:46.146062    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 107f02380e96"
	I0722 04:23:46.164458    4749 logs.go:123] Gathering logs for kube-apiserver [6f7819ffc2dd] ...
	I0722 04:23:46.164468    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f7819ffc2dd"
	I0722 04:23:46.178143    4749 logs.go:123] Gathering logs for coredns [a11f092c49f3] ...
	I0722 04:23:46.178154    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a11f092c49f3"
	I0722 04:23:46.189730    4749 logs.go:123] Gathering logs for kube-scheduler [829d882a5dcf] ...
	I0722 04:23:46.189743    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 829d882a5dcf"
	I0722 04:23:46.202641    4749 logs.go:123] Gathering logs for Docker ...
	I0722 04:23:46.202651    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0722 04:23:46.227513    4749 logs.go:123] Gathering logs for container status ...
	I0722 04:23:46.227521    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 04:23:48.741174    4749 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:23:53.742141    4749 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:23:53.742289    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0722 04:23:53.753308    4749 logs.go:276] 2 containers: [6f7819ffc2dd b242274d2995]
	I0722 04:23:53.753382    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0722 04:23:53.763640    4749 logs.go:276] 2 containers: [c1a3c1bc5e08 cdb2f02c95ca]
	I0722 04:23:53.763713    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0722 04:23:53.773660    4749 logs.go:276] 1 containers: [a11f092c49f3]
	I0722 04:23:53.773737    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0722 04:23:53.785485    4749 logs.go:276] 2 containers: [829d882a5dcf 9673cbf4cea7]
	I0722 04:23:53.785563    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0722 04:23:53.796325    4749 logs.go:276] 1 containers: [1be7d7e3405b]
	I0722 04:23:53.796392    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0722 04:23:53.809250    4749 logs.go:276] 2 containers: [b9a200dc8c73 107f02380e96]
	I0722 04:23:53.809320    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0722 04:23:53.819398    4749 logs.go:276] 0 containers: []
	W0722 04:23:53.819408    4749 logs.go:278] No container was found matching "kindnet"
	I0722 04:23:53.819462    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0722 04:23:53.830026    4749 logs.go:276] 2 containers: [ac2f27131054 3222ecbcbbb5]
	I0722 04:23:53.830048    4749 logs.go:123] Gathering logs for dmesg ...
	I0722 04:23:53.830054    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 04:23:53.834313    4749 logs.go:123] Gathering logs for kube-apiserver [6f7819ffc2dd] ...
	I0722 04:23:53.834323    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f7819ffc2dd"
	I0722 04:23:53.848366    4749 logs.go:123] Gathering logs for etcd [c1a3c1bc5e08] ...
	I0722 04:23:53.848379    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1a3c1bc5e08"
	I0722 04:23:53.868929    4749 logs.go:123] Gathering logs for storage-provisioner [ac2f27131054] ...
	I0722 04:23:53.868943    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac2f27131054"
	I0722 04:23:53.887706    4749 logs.go:123] Gathering logs for storage-provisioner [3222ecbcbbb5] ...
	I0722 04:23:53.887717    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3222ecbcbbb5"
	I0722 04:23:53.898959    4749 logs.go:123] Gathering logs for Docker ...
	I0722 04:23:53.898974    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0722 04:23:53.921321    4749 logs.go:123] Gathering logs for etcd [cdb2f02c95ca] ...
	I0722 04:23:53.921331    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdb2f02c95ca"
	I0722 04:23:53.935220    4749 logs.go:123] Gathering logs for kube-proxy [1be7d7e3405b] ...
	I0722 04:23:53.935230    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1be7d7e3405b"
	I0722 04:23:53.947581    4749 logs.go:123] Gathering logs for kube-controller-manager [107f02380e96] ...
	I0722 04:23:53.947591    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 107f02380e96"
	I0722 04:23:53.963753    4749 logs.go:123] Gathering logs for container status ...
	I0722 04:23:53.963763    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 04:23:53.975571    4749 logs.go:123] Gathering logs for kubelet ...
	I0722 04:23:53.975581    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 04:23:54.015723    4749 logs.go:123] Gathering logs for describe nodes ...
	I0722 04:23:54.015739    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0722 04:23:54.055399    4749 logs.go:123] Gathering logs for kube-apiserver [b242274d2995] ...
	I0722 04:23:54.055412    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b242274d2995"
	I0722 04:23:54.084010    4749 logs.go:123] Gathering logs for kube-scheduler [9673cbf4cea7] ...
	I0722 04:23:54.084031    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9673cbf4cea7"
	I0722 04:23:54.105387    4749 logs.go:123] Gathering logs for coredns [a11f092c49f3] ...
	I0722 04:23:54.105398    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a11f092c49f3"
	I0722 04:23:54.119634    4749 logs.go:123] Gathering logs for kube-scheduler [829d882a5dcf] ...
	I0722 04:23:54.119646    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 829d882a5dcf"
	I0722 04:23:54.132720    4749 logs.go:123] Gathering logs for kube-controller-manager [b9a200dc8c73] ...
	I0722 04:23:54.132732    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9a200dc8c73"
	I0722 04:23:56.652981    4749 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:24:01.655264    4749 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:24:01.655513    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0722 04:24:01.672838    4749 logs.go:276] 2 containers: [6f7819ffc2dd b242274d2995]
	I0722 04:24:01.672925    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0722 04:24:01.686308    4749 logs.go:276] 2 containers: [c1a3c1bc5e08 cdb2f02c95ca]
	I0722 04:24:01.686391    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0722 04:24:01.697462    4749 logs.go:276] 1 containers: [a11f092c49f3]
	I0722 04:24:01.697526    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0722 04:24:01.709732    4749 logs.go:276] 2 containers: [829d882a5dcf 9673cbf4cea7]
	I0722 04:24:01.709806    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0722 04:24:01.722232    4749 logs.go:276] 1 containers: [1be7d7e3405b]
	I0722 04:24:01.722300    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0722 04:24:01.732928    4749 logs.go:276] 2 containers: [b9a200dc8c73 107f02380e96]
	I0722 04:24:01.732994    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0722 04:24:01.742980    4749 logs.go:276] 0 containers: []
	W0722 04:24:01.742992    4749 logs.go:278] No container was found matching "kindnet"
	I0722 04:24:01.743042    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0722 04:24:01.753642    4749 logs.go:276] 2 containers: [ac2f27131054 3222ecbcbbb5]
	I0722 04:24:01.753659    4749 logs.go:123] Gathering logs for describe nodes ...
	I0722 04:24:01.753665    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0722 04:24:01.788023    4749 logs.go:123] Gathering logs for etcd [c1a3c1bc5e08] ...
	I0722 04:24:01.788034    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1a3c1bc5e08"
	I0722 04:24:01.801795    4749 logs.go:123] Gathering logs for etcd [cdb2f02c95ca] ...
	I0722 04:24:01.801805    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdb2f02c95ca"
	I0722 04:24:01.817544    4749 logs.go:123] Gathering logs for kube-scheduler [9673cbf4cea7] ...
	I0722 04:24:01.817555    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9673cbf4cea7"
	I0722 04:24:01.840135    4749 logs.go:123] Gathering logs for kube-controller-manager [b9a200dc8c73] ...
	I0722 04:24:01.840145    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9a200dc8c73"
	I0722 04:24:01.857286    4749 logs.go:123] Gathering logs for kube-controller-manager [107f02380e96] ...
	I0722 04:24:01.857299    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 107f02380e96"
	I0722 04:24:01.870701    4749 logs.go:123] Gathering logs for storage-provisioner [ac2f27131054] ...
	I0722 04:24:01.870714    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac2f27131054"
	I0722 04:24:01.881788    4749 logs.go:123] Gathering logs for container status ...
	I0722 04:24:01.881803    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 04:24:01.893272    4749 logs.go:123] Gathering logs for kube-apiserver [b242274d2995] ...
	I0722 04:24:01.893283    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b242274d2995"
	I0722 04:24:01.917795    4749 logs.go:123] Gathering logs for storage-provisioner [3222ecbcbbb5] ...
	I0722 04:24:01.917804    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3222ecbcbbb5"
	I0722 04:24:01.929379    4749 logs.go:123] Gathering logs for Docker ...
	I0722 04:24:01.929390    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0722 04:24:01.952334    4749 logs.go:123] Gathering logs for dmesg ...
	I0722 04:24:01.952342    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 04:24:01.956456    4749 logs.go:123] Gathering logs for kube-scheduler [829d882a5dcf] ...
	I0722 04:24:01.956463    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 829d882a5dcf"
	I0722 04:24:01.970553    4749 logs.go:123] Gathering logs for kubelet ...
	I0722 04:24:01.970564    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 04:24:02.007738    4749 logs.go:123] Gathering logs for kube-apiserver [6f7819ffc2dd] ...
	I0722 04:24:02.007750    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f7819ffc2dd"
	I0722 04:24:02.024688    4749 logs.go:123] Gathering logs for coredns [a11f092c49f3] ...
	I0722 04:24:02.024698    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a11f092c49f3"
	I0722 04:24:02.036106    4749 logs.go:123] Gathering logs for kube-proxy [1be7d7e3405b] ...
	I0722 04:24:02.036118    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1be7d7e3405b"
	I0722 04:24:04.549672    4749 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:24:09.552034    4749 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:24:09.552238    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0722 04:24:09.568753    4749 logs.go:276] 2 containers: [6f7819ffc2dd b242274d2995]
	I0722 04:24:09.568831    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0722 04:24:09.581458    4749 logs.go:276] 2 containers: [c1a3c1bc5e08 cdb2f02c95ca]
	I0722 04:24:09.581526    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0722 04:24:09.592685    4749 logs.go:276] 1 containers: [a11f092c49f3]
	I0722 04:24:09.592751    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0722 04:24:09.603102    4749 logs.go:276] 2 containers: [829d882a5dcf 9673cbf4cea7]
	I0722 04:24:09.603160    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0722 04:24:09.613083    4749 logs.go:276] 1 containers: [1be7d7e3405b]
	I0722 04:24:09.613147    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0722 04:24:09.623736    4749 logs.go:276] 2 containers: [b9a200dc8c73 107f02380e96]
	I0722 04:24:09.623799    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0722 04:24:09.633883    4749 logs.go:276] 0 containers: []
	W0722 04:24:09.633897    4749 logs.go:278] No container was found matching "kindnet"
	I0722 04:24:09.633951    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0722 04:24:09.644648    4749 logs.go:276] 2 containers: [ac2f27131054 3222ecbcbbb5]
	I0722 04:24:09.644664    4749 logs.go:123] Gathering logs for Docker ...
	I0722 04:24:09.644670    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0722 04:24:09.667847    4749 logs.go:123] Gathering logs for kube-apiserver [6f7819ffc2dd] ...
	I0722 04:24:09.667857    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f7819ffc2dd"
	I0722 04:24:09.684634    4749 logs.go:123] Gathering logs for kube-controller-manager [b9a200dc8c73] ...
	I0722 04:24:09.684647    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9a200dc8c73"
	I0722 04:24:09.702004    4749 logs.go:123] Gathering logs for kubelet ...
	I0722 04:24:09.702014    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 04:24:09.741632    4749 logs.go:123] Gathering logs for dmesg ...
	I0722 04:24:09.741641    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 04:24:09.746048    4749 logs.go:123] Gathering logs for kube-scheduler [829d882a5dcf] ...
	I0722 04:24:09.746054    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 829d882a5dcf"
	I0722 04:24:09.758708    4749 logs.go:123] Gathering logs for storage-provisioner [3222ecbcbbb5] ...
	I0722 04:24:09.758723    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3222ecbcbbb5"
	I0722 04:24:09.770459    4749 logs.go:123] Gathering logs for kube-apiserver [b242274d2995] ...
	I0722 04:24:09.770469    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b242274d2995"
	I0722 04:24:09.795102    4749 logs.go:123] Gathering logs for etcd [cdb2f02c95ca] ...
	I0722 04:24:09.795113    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdb2f02c95ca"
	I0722 04:24:09.813255    4749 logs.go:123] Gathering logs for coredns [a11f092c49f3] ...
	I0722 04:24:09.813265    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a11f092c49f3"
	I0722 04:24:09.824907    4749 logs.go:123] Gathering logs for kube-scheduler [9673cbf4cea7] ...
	I0722 04:24:09.824917    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9673cbf4cea7"
	I0722 04:24:09.845607    4749 logs.go:123] Gathering logs for kube-proxy [1be7d7e3405b] ...
	I0722 04:24:09.845618    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1be7d7e3405b"
	I0722 04:24:09.857802    4749 logs.go:123] Gathering logs for kube-controller-manager [107f02380e96] ...
	I0722 04:24:09.857812    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 107f02380e96"
	I0722 04:24:09.871750    4749 logs.go:123] Gathering logs for storage-provisioner [ac2f27131054] ...
	I0722 04:24:09.871759    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac2f27131054"
	I0722 04:24:09.882745    4749 logs.go:123] Gathering logs for container status ...
	I0722 04:24:09.882755    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 04:24:09.894560    4749 logs.go:123] Gathering logs for describe nodes ...
	I0722 04:24:09.894571    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0722 04:24:09.928823    4749 logs.go:123] Gathering logs for etcd [c1a3c1bc5e08] ...
	I0722 04:24:09.928834    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1a3c1bc5e08"
	I0722 04:24:12.449114    4749 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:24:17.451293    4749 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:24:17.451483    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0722 04:24:17.464179    4749 logs.go:276] 2 containers: [6f7819ffc2dd b242274d2995]
	I0722 04:24:17.464250    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0722 04:24:17.475103    4749 logs.go:276] 2 containers: [c1a3c1bc5e08 cdb2f02c95ca]
	I0722 04:24:17.475177    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0722 04:24:17.485688    4749 logs.go:276] 1 containers: [a11f092c49f3]
	I0722 04:24:17.485753    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0722 04:24:17.496441    4749 logs.go:276] 2 containers: [829d882a5dcf 9673cbf4cea7]
	I0722 04:24:17.496513    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0722 04:24:17.507511    4749 logs.go:276] 1 containers: [1be7d7e3405b]
	I0722 04:24:17.507574    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0722 04:24:17.518653    4749 logs.go:276] 2 containers: [b9a200dc8c73 107f02380e96]
	I0722 04:24:17.518722    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0722 04:24:17.528830    4749 logs.go:276] 0 containers: []
	W0722 04:24:17.528840    4749 logs.go:278] No container was found matching "kindnet"
	I0722 04:24:17.528896    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0722 04:24:17.539518    4749 logs.go:276] 2 containers: [ac2f27131054 3222ecbcbbb5]
	I0722 04:24:17.539535    4749 logs.go:123] Gathering logs for kube-apiserver [b242274d2995] ...
	I0722 04:24:17.539540    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b242274d2995"
	I0722 04:24:17.567679    4749 logs.go:123] Gathering logs for coredns [a11f092c49f3] ...
	I0722 04:24:17.567691    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a11f092c49f3"
	I0722 04:24:17.579121    4749 logs.go:123] Gathering logs for kube-proxy [1be7d7e3405b] ...
	I0722 04:24:17.579130    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1be7d7e3405b"
	I0722 04:24:17.590563    4749 logs.go:123] Gathering logs for kubelet ...
	I0722 04:24:17.590574    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 04:24:17.630113    4749 logs.go:123] Gathering logs for describe nodes ...
	I0722 04:24:17.630120    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0722 04:24:17.665839    4749 logs.go:123] Gathering logs for kube-scheduler [829d882a5dcf] ...
	I0722 04:24:17.665850    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 829d882a5dcf"
	I0722 04:24:17.678741    4749 logs.go:123] Gathering logs for kube-scheduler [9673cbf4cea7] ...
	I0722 04:24:17.678751    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9673cbf4cea7"
	I0722 04:24:17.700547    4749 logs.go:123] Gathering logs for kube-controller-manager [107f02380e96] ...
	I0722 04:24:17.700561    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 107f02380e96"
	I0722 04:24:17.714892    4749 logs.go:123] Gathering logs for storage-provisioner [3222ecbcbbb5] ...
	I0722 04:24:17.714903    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3222ecbcbbb5"
	I0722 04:24:17.726901    4749 logs.go:123] Gathering logs for kube-apiserver [6f7819ffc2dd] ...
	I0722 04:24:17.726912    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f7819ffc2dd"
	I0722 04:24:17.741398    4749 logs.go:123] Gathering logs for etcd [c1a3c1bc5e08] ...
	I0722 04:24:17.741408    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1a3c1bc5e08"
	I0722 04:24:17.755159    4749 logs.go:123] Gathering logs for dmesg ...
	I0722 04:24:17.755170    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 04:24:17.759374    4749 logs.go:123] Gathering logs for etcd [cdb2f02c95ca] ...
	I0722 04:24:17.759380    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdb2f02c95ca"
	I0722 04:24:17.774154    4749 logs.go:123] Gathering logs for kube-controller-manager [b9a200dc8c73] ...
	I0722 04:24:17.774166    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9a200dc8c73"
	I0722 04:24:17.791875    4749 logs.go:123] Gathering logs for storage-provisioner [ac2f27131054] ...
	I0722 04:24:17.791886    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac2f27131054"
	I0722 04:24:17.804192    4749 logs.go:123] Gathering logs for Docker ...
	I0722 04:24:17.804205    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0722 04:24:17.826420    4749 logs.go:123] Gathering logs for container status ...
	I0722 04:24:17.826429    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 04:24:20.340133    4749 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:24:25.342269    4749 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:24:25.342381    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0722 04:24:25.357838    4749 logs.go:276] 2 containers: [6f7819ffc2dd b242274d2995]
	I0722 04:24:25.357913    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0722 04:24:25.376291    4749 logs.go:276] 2 containers: [c1a3c1bc5e08 cdb2f02c95ca]
	I0722 04:24:25.376359    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0722 04:24:25.387244    4749 logs.go:276] 1 containers: [a11f092c49f3]
	I0722 04:24:25.387318    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0722 04:24:25.398203    4749 logs.go:276] 2 containers: [829d882a5dcf 9673cbf4cea7]
	I0722 04:24:25.398268    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0722 04:24:25.408560    4749 logs.go:276] 1 containers: [1be7d7e3405b]
	I0722 04:24:25.408626    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0722 04:24:25.419244    4749 logs.go:276] 2 containers: [b9a200dc8c73 107f02380e96]
	I0722 04:24:25.419313    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0722 04:24:25.428856    4749 logs.go:276] 0 containers: []
	W0722 04:24:25.428875    4749 logs.go:278] No container was found matching "kindnet"
	I0722 04:24:25.428925    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0722 04:24:25.439464    4749 logs.go:276] 2 containers: [ac2f27131054 3222ecbcbbb5]
	I0722 04:24:25.439482    4749 logs.go:123] Gathering logs for kube-scheduler [829d882a5dcf] ...
	I0722 04:24:25.439487    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 829d882a5dcf"
	I0722 04:24:25.452639    4749 logs.go:123] Gathering logs for storage-provisioner [ac2f27131054] ...
	I0722 04:24:25.452651    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac2f27131054"
	I0722 04:24:25.464252    4749 logs.go:123] Gathering logs for container status ...
	I0722 04:24:25.464263    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 04:24:25.476238    4749 logs.go:123] Gathering logs for dmesg ...
	I0722 04:24:25.476254    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 04:24:25.480459    4749 logs.go:123] Gathering logs for kube-apiserver [6f7819ffc2dd] ...
	I0722 04:24:25.480467    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f7819ffc2dd"
	I0722 04:24:25.494683    4749 logs.go:123] Gathering logs for kube-scheduler [9673cbf4cea7] ...
	I0722 04:24:25.494695    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9673cbf4cea7"
	I0722 04:24:25.521086    4749 logs.go:123] Gathering logs for kube-proxy [1be7d7e3405b] ...
	I0722 04:24:25.521096    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1be7d7e3405b"
	I0722 04:24:25.532932    4749 logs.go:123] Gathering logs for storage-provisioner [3222ecbcbbb5] ...
	I0722 04:24:25.532943    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3222ecbcbbb5"
	I0722 04:24:25.544221    4749 logs.go:123] Gathering logs for kubelet ...
	I0722 04:24:25.544232    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 04:24:25.582804    4749 logs.go:123] Gathering logs for etcd [cdb2f02c95ca] ...
	I0722 04:24:25.582813    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdb2f02c95ca"
	I0722 04:24:25.597696    4749 logs.go:123] Gathering logs for coredns [a11f092c49f3] ...
	I0722 04:24:25.597706    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a11f092c49f3"
	I0722 04:24:25.608426    4749 logs.go:123] Gathering logs for kube-controller-manager [107f02380e96] ...
	I0722 04:24:25.608439    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 107f02380e96"
	I0722 04:24:25.622104    4749 logs.go:123] Gathering logs for describe nodes ...
	I0722 04:24:25.622115    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0722 04:24:25.657861    4749 logs.go:123] Gathering logs for kube-apiserver [b242274d2995] ...
	I0722 04:24:25.657878    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b242274d2995"
	I0722 04:24:25.685981    4749 logs.go:123] Gathering logs for etcd [c1a3c1bc5e08] ...
	I0722 04:24:25.685992    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1a3c1bc5e08"
	I0722 04:24:25.700298    4749 logs.go:123] Gathering logs for kube-controller-manager [b9a200dc8c73] ...
	I0722 04:24:25.700308    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9a200dc8c73"
	I0722 04:24:25.722927    4749 logs.go:123] Gathering logs for Docker ...
	I0722 04:24:25.722938    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0722 04:24:28.249031    4749 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:24:33.251332    4749 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:24:33.251606    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0722 04:24:33.279846    4749 logs.go:276] 2 containers: [6f7819ffc2dd b242274d2995]
	I0722 04:24:33.279980    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0722 04:24:33.297984    4749 logs.go:276] 2 containers: [c1a3c1bc5e08 cdb2f02c95ca]
	I0722 04:24:33.298064    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0722 04:24:33.320108    4749 logs.go:276] 1 containers: [a11f092c49f3]
	I0722 04:24:33.320178    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0722 04:24:33.331058    4749 logs.go:276] 2 containers: [829d882a5dcf 9673cbf4cea7]
	I0722 04:24:33.331125    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0722 04:24:33.341721    4749 logs.go:276] 1 containers: [1be7d7e3405b]
	I0722 04:24:33.341786    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0722 04:24:33.356099    4749 logs.go:276] 2 containers: [b9a200dc8c73 107f02380e96]
	I0722 04:24:33.356163    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0722 04:24:33.367357    4749 logs.go:276] 0 containers: []
	W0722 04:24:33.367370    4749 logs.go:278] No container was found matching "kindnet"
	I0722 04:24:33.367437    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0722 04:24:33.379102    4749 logs.go:276] 2 containers: [ac2f27131054 3222ecbcbbb5]
	I0722 04:24:33.379119    4749 logs.go:123] Gathering logs for kube-proxy [1be7d7e3405b] ...
	I0722 04:24:33.379125    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1be7d7e3405b"
	I0722 04:24:33.391627    4749 logs.go:123] Gathering logs for storage-provisioner [3222ecbcbbb5] ...
	I0722 04:24:33.391643    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3222ecbcbbb5"
	I0722 04:24:33.402904    4749 logs.go:123] Gathering logs for Docker ...
	I0722 04:24:33.402915    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0722 04:24:33.426703    4749 logs.go:123] Gathering logs for kube-apiserver [b242274d2995] ...
	I0722 04:24:33.426710    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b242274d2995"
	I0722 04:24:33.451880    4749 logs.go:123] Gathering logs for kube-controller-manager [b9a200dc8c73] ...
	I0722 04:24:33.451891    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9a200dc8c73"
	I0722 04:24:33.471146    4749 logs.go:123] Gathering logs for storage-provisioner [ac2f27131054] ...
	I0722 04:24:33.471158    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac2f27131054"
	I0722 04:24:33.483283    4749 logs.go:123] Gathering logs for kubelet ...
	I0722 04:24:33.483297    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 04:24:33.521703    4749 logs.go:123] Gathering logs for kube-apiserver [6f7819ffc2dd] ...
	I0722 04:24:33.521711    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f7819ffc2dd"
	I0722 04:24:33.535599    4749 logs.go:123] Gathering logs for etcd [c1a3c1bc5e08] ...
	I0722 04:24:33.535609    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1a3c1bc5e08"
	I0722 04:24:33.549316    4749 logs.go:123] Gathering logs for coredns [a11f092c49f3] ...
	I0722 04:24:33.549330    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a11f092c49f3"
	I0722 04:24:33.560714    4749 logs.go:123] Gathering logs for kube-scheduler [829d882a5dcf] ...
	I0722 04:24:33.560725    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 829d882a5dcf"
	I0722 04:24:33.578645    4749 logs.go:123] Gathering logs for kube-controller-manager [107f02380e96] ...
	I0722 04:24:33.578659    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 107f02380e96"
	I0722 04:24:33.592427    4749 logs.go:123] Gathering logs for dmesg ...
	I0722 04:24:33.592439    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 04:24:33.596812    4749 logs.go:123] Gathering logs for etcd [cdb2f02c95ca] ...
	I0722 04:24:33.596818    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdb2f02c95ca"
	I0722 04:24:33.610730    4749 logs.go:123] Gathering logs for kube-scheduler [9673cbf4cea7] ...
	I0722 04:24:33.610746    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9673cbf4cea7"
	I0722 04:24:33.631495    4749 logs.go:123] Gathering logs for container status ...
	I0722 04:24:33.631510    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 04:24:33.644071    4749 logs.go:123] Gathering logs for describe nodes ...
	I0722 04:24:33.644085    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0722 04:24:36.181111    4749 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:24:41.183351    4749 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:24:41.183551    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0722 04:24:41.199456    4749 logs.go:276] 2 containers: [6f7819ffc2dd b242274d2995]
	I0722 04:24:41.199535    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0722 04:24:41.210927    4749 logs.go:276] 2 containers: [c1a3c1bc5e08 cdb2f02c95ca]
	I0722 04:24:41.210996    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0722 04:24:41.221250    4749 logs.go:276] 1 containers: [a11f092c49f3]
	I0722 04:24:41.221313    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0722 04:24:41.231590    4749 logs.go:276] 2 containers: [829d882a5dcf 9673cbf4cea7]
	I0722 04:24:41.231663    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0722 04:24:41.243324    4749 logs.go:276] 1 containers: [1be7d7e3405b]
	I0722 04:24:41.243393    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0722 04:24:41.253988    4749 logs.go:276] 2 containers: [b9a200dc8c73 107f02380e96]
	I0722 04:24:41.254056    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0722 04:24:41.264120    4749 logs.go:276] 0 containers: []
	W0722 04:24:41.264132    4749 logs.go:278] No container was found matching "kindnet"
	I0722 04:24:41.264192    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0722 04:24:41.274763    4749 logs.go:276] 2 containers: [ac2f27131054 3222ecbcbbb5]
	I0722 04:24:41.274787    4749 logs.go:123] Gathering logs for describe nodes ...
	I0722 04:24:41.274793    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0722 04:24:41.308504    4749 logs.go:123] Gathering logs for kube-apiserver [b242274d2995] ...
	I0722 04:24:41.308517    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b242274d2995"
	I0722 04:24:41.333086    4749 logs.go:123] Gathering logs for kube-proxy [1be7d7e3405b] ...
	I0722 04:24:41.333096    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1be7d7e3405b"
	I0722 04:24:41.349554    4749 logs.go:123] Gathering logs for storage-provisioner [3222ecbcbbb5] ...
	I0722 04:24:41.349568    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3222ecbcbbb5"
	I0722 04:24:41.378206    4749 logs.go:123] Gathering logs for kube-scheduler [829d882a5dcf] ...
	I0722 04:24:41.378219    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 829d882a5dcf"
	I0722 04:24:41.405143    4749 logs.go:123] Gathering logs for kube-controller-manager [107f02380e96] ...
	I0722 04:24:41.405159    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 107f02380e96"
	I0722 04:24:41.425820    4749 logs.go:123] Gathering logs for dmesg ...
	I0722 04:24:41.425835    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 04:24:41.430183    4749 logs.go:123] Gathering logs for kube-apiserver [6f7819ffc2dd] ...
	I0722 04:24:41.430190    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f7819ffc2dd"
	I0722 04:24:41.449323    4749 logs.go:123] Gathering logs for etcd [c1a3c1bc5e08] ...
	I0722 04:24:41.449334    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1a3c1bc5e08"
	I0722 04:24:41.462955    4749 logs.go:123] Gathering logs for coredns [a11f092c49f3] ...
	I0722 04:24:41.462967    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a11f092c49f3"
	I0722 04:24:41.475059    4749 logs.go:123] Gathering logs for kubelet ...
	I0722 04:24:41.475071    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 04:24:41.514899    4749 logs.go:123] Gathering logs for kube-controller-manager [b9a200dc8c73] ...
	I0722 04:24:41.514913    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9a200dc8c73"
	I0722 04:24:41.532080    4749 logs.go:123] Gathering logs for storage-provisioner [ac2f27131054] ...
	I0722 04:24:41.532091    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac2f27131054"
	I0722 04:24:41.544866    4749 logs.go:123] Gathering logs for etcd [cdb2f02c95ca] ...
	I0722 04:24:41.544876    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdb2f02c95ca"
	I0722 04:24:41.559152    4749 logs.go:123] Gathering logs for kube-scheduler [9673cbf4cea7] ...
	I0722 04:24:41.559163    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9673cbf4cea7"
	I0722 04:24:41.580310    4749 logs.go:123] Gathering logs for Docker ...
	I0722 04:24:41.580321    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0722 04:24:41.604900    4749 logs.go:123] Gathering logs for container status ...
	I0722 04:24:41.604911    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 04:24:44.118795    4749 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:24:49.121231    4749 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:24:49.121510    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0722 04:24:49.151311    4749 logs.go:276] 2 containers: [6f7819ffc2dd b242274d2995]
	I0722 04:24:49.151451    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0722 04:24:49.171052    4749 logs.go:276] 2 containers: [c1a3c1bc5e08 cdb2f02c95ca]
	I0722 04:24:49.171165    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0722 04:24:49.185718    4749 logs.go:276] 1 containers: [a11f092c49f3]
	I0722 04:24:49.185801    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0722 04:24:49.196976    4749 logs.go:276] 2 containers: [829d882a5dcf 9673cbf4cea7]
	I0722 04:24:49.197044    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0722 04:24:49.206877    4749 logs.go:276] 1 containers: [1be7d7e3405b]
	I0722 04:24:49.206958    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0722 04:24:49.217146    4749 logs.go:276] 2 containers: [b9a200dc8c73 107f02380e96]
	I0722 04:24:49.217231    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0722 04:24:49.230496    4749 logs.go:276] 0 containers: []
	W0722 04:24:49.230511    4749 logs.go:278] No container was found matching "kindnet"
	I0722 04:24:49.230583    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0722 04:24:49.240616    4749 logs.go:276] 2 containers: [ac2f27131054 3222ecbcbbb5]
	I0722 04:24:49.240634    4749 logs.go:123] Gathering logs for kubelet ...
	I0722 04:24:49.240640    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 04:24:49.279609    4749 logs.go:123] Gathering logs for etcd [cdb2f02c95ca] ...
	I0722 04:24:49.279620    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdb2f02c95ca"
	I0722 04:24:49.293968    4749 logs.go:123] Gathering logs for kube-controller-manager [107f02380e96] ...
	I0722 04:24:49.293978    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 107f02380e96"
	I0722 04:24:49.307726    4749 logs.go:123] Gathering logs for kube-proxy [1be7d7e3405b] ...
	I0722 04:24:49.307735    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1be7d7e3405b"
	I0722 04:24:49.322756    4749 logs.go:123] Gathering logs for Docker ...
	I0722 04:24:49.322767    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0722 04:24:49.347375    4749 logs.go:123] Gathering logs for container status ...
	I0722 04:24:49.347388    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 04:24:49.360133    4749 logs.go:123] Gathering logs for describe nodes ...
	I0722 04:24:49.360144    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0722 04:24:49.394709    4749 logs.go:123] Gathering logs for coredns [a11f092c49f3] ...
	I0722 04:24:49.394721    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a11f092c49f3"
	I0722 04:24:49.405659    4749 logs.go:123] Gathering logs for kube-scheduler [829d882a5dcf] ...
	I0722 04:24:49.405673    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 829d882a5dcf"
	I0722 04:24:49.421980    4749 logs.go:123] Gathering logs for kube-controller-manager [b9a200dc8c73] ...
	I0722 04:24:49.421992    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9a200dc8c73"
	I0722 04:24:49.447628    4749 logs.go:123] Gathering logs for storage-provisioner [3222ecbcbbb5] ...
	I0722 04:24:49.447640    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3222ecbcbbb5"
	I0722 04:24:49.459349    4749 logs.go:123] Gathering logs for dmesg ...
	I0722 04:24:49.459362    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 04:24:49.463780    4749 logs.go:123] Gathering logs for kube-apiserver [6f7819ffc2dd] ...
	I0722 04:24:49.463786    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f7819ffc2dd"
	I0722 04:24:49.478619    4749 logs.go:123] Gathering logs for kube-apiserver [b242274d2995] ...
	I0722 04:24:49.478631    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b242274d2995"
	I0722 04:24:49.503567    4749 logs.go:123] Gathering logs for etcd [c1a3c1bc5e08] ...
	I0722 04:24:49.503577    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c1a3c1bc5e08"
	I0722 04:24:49.517695    4749 logs.go:123] Gathering logs for kube-scheduler [9673cbf4cea7] ...
	I0722 04:24:49.517706    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9673cbf4cea7"
	I0722 04:24:49.540097    4749 logs.go:123] Gathering logs for storage-provisioner [ac2f27131054] ...
	I0722 04:24:49.540108    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac2f27131054"
	I0722 04:24:52.053897    4749 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:24:57.056542    4749 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:24:57.056626    4749 kubeadm.go:597] duration metric: took 4m3.741945958s to restartPrimaryControlPlane
	W0722 04:24:57.056704    4749 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0722 04:24:57.056741    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0722 04:24:58.087990    4749 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.031253s)
	I0722 04:24:58.088046    4749 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 04:24:58.093112    4749 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0722 04:24:58.095900    4749 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 04:24:58.098615    4749 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0722 04:24:58.098621    4749 kubeadm.go:157] found existing configuration files:
	
	I0722 04:24:58.098642    4749 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50463 /etc/kubernetes/admin.conf
	I0722 04:24:58.101219    4749 kubeadm.go:163] "https://control-plane.minikube.internal:50463" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50463 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0722 04:24:58.101239    4749 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0722 04:24:58.104133    4749 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50463 /etc/kubernetes/kubelet.conf
	I0722 04:24:58.107280    4749 kubeadm.go:163] "https://control-plane.minikube.internal:50463" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50463 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0722 04:24:58.107305    4749 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0722 04:24:58.110136    4749 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50463 /etc/kubernetes/controller-manager.conf
	I0722 04:24:58.112672    4749 kubeadm.go:163] "https://control-plane.minikube.internal:50463" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50463 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0722 04:24:58.112692    4749 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 04:24:58.115769    4749 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50463 /etc/kubernetes/scheduler.conf
	I0722 04:24:58.118940    4749 kubeadm.go:163] "https://control-plane.minikube.internal:50463" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50463 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0722 04:24:58.118963    4749 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0722 04:24:58.121735    4749 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0722 04:24:58.139436    4749 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0722 04:24:58.139562    4749 kubeadm.go:310] [preflight] Running pre-flight checks
	I0722 04:24:58.192818    4749 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0722 04:24:58.192875    4749 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0722 04:24:58.192950    4749 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0722 04:24:58.243936    4749 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0722 04:24:58.249147    4749 out.go:204]   - Generating certificates and keys ...
	I0722 04:24:58.249181    4749 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0722 04:24:58.249212    4749 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0722 04:24:58.249258    4749 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0722 04:24:58.249316    4749 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0722 04:24:58.249353    4749 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0722 04:24:58.249379    4749 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0722 04:24:58.249412    4749 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0722 04:24:58.249443    4749 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0722 04:24:58.249486    4749 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0722 04:24:58.249541    4749 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0722 04:24:58.249565    4749 kubeadm.go:310] [certs] Using the existing "sa" key
	I0722 04:24:58.249593    4749 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0722 04:24:58.334827    4749 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0722 04:24:58.423619    4749 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0722 04:24:58.489988    4749 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0722 04:24:58.594929    4749 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0722 04:24:58.624767    4749 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0722 04:24:58.625147    4749 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0722 04:24:58.625209    4749 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0722 04:24:58.713841    4749 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0722 04:24:58.717546    4749 out.go:204]   - Booting up control plane ...
	I0722 04:24:58.717601    4749 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0722 04:24:58.717683    4749 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0722 04:24:58.717744    4749 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0722 04:24:58.719607    4749 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0722 04:24:58.720390    4749 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0722 04:25:03.222841    4749 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.502142 seconds
	I0722 04:25:03.222898    4749 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0722 04:25:03.226740    4749 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0722 04:25:03.740843    4749 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0722 04:25:03.741115    4749 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-239000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0722 04:25:04.246991    4749 kubeadm.go:310] [bootstrap-token] Using token: kimnev.ubkalfagcbm7tlf8
	I0722 04:25:04.252827    4749 out.go:204]   - Configuring RBAC rules ...
	I0722 04:25:04.252897    4749 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0722 04:25:04.252963    4749 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0722 04:25:04.258737    4749 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0722 04:25:04.259726    4749 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0722 04:25:04.260534    4749 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0722 04:25:04.261348    4749 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0722 04:25:04.264518    4749 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0722 04:25:04.454911    4749 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0722 04:25:04.651468    4749 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0722 04:25:04.651999    4749 kubeadm.go:310] 
	I0722 04:25:04.652029    4749 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0722 04:25:04.652034    4749 kubeadm.go:310] 
	I0722 04:25:04.652094    4749 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0722 04:25:04.652100    4749 kubeadm.go:310] 
	I0722 04:25:04.652114    4749 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0722 04:25:04.652140    4749 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0722 04:25:04.652192    4749 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0722 04:25:04.652197    4749 kubeadm.go:310] 
	I0722 04:25:04.652227    4749 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0722 04:25:04.652230    4749 kubeadm.go:310] 
	I0722 04:25:04.652256    4749 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0722 04:25:04.652260    4749 kubeadm.go:310] 
	I0722 04:25:04.652286    4749 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0722 04:25:04.652335    4749 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0722 04:25:04.652380    4749 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0722 04:25:04.652383    4749 kubeadm.go:310] 
	I0722 04:25:04.652427    4749 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0722 04:25:04.652468    4749 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0722 04:25:04.652471    4749 kubeadm.go:310] 
	I0722 04:25:04.652522    4749 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token kimnev.ubkalfagcbm7tlf8 \
	I0722 04:25:04.652576    4749 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e1f95f96cbafa48be8d9b2581ace651393ef041feb8f94ca3ac47ac6fd85c5e4 \
	I0722 04:25:04.652588    4749 kubeadm.go:310] 	--control-plane 
	I0722 04:25:04.652591    4749 kubeadm.go:310] 
	I0722 04:25:04.652668    4749 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0722 04:25:04.652679    4749 kubeadm.go:310] 
	I0722 04:25:04.652718    4749 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token kimnev.ubkalfagcbm7tlf8 \
	I0722 04:25:04.652776    4749 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e1f95f96cbafa48be8d9b2581ace651393ef041feb8f94ca3ac47ac6fd85c5e4 
	I0722 04:25:04.652847    4749 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0722 04:25:04.652857    4749 cni.go:84] Creating CNI manager for ""
	I0722 04:25:04.652867    4749 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0722 04:25:04.656603    4749 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0722 04:25:04.664577    4749 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0722 04:25:04.667824    4749 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0722 04:25:04.672648    4749 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0722 04:25:04.672687    4749 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0722 04:25:04.672697    4749 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-239000 minikube.k8s.io/updated_at=2024_07_22T04_25_04_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=8e5b1d22910d5d447b525af478862a848159d7b7 minikube.k8s.io/name=stopped-upgrade-239000 minikube.k8s.io/primary=true
	I0722 04:25:04.721269    4749 ops.go:34] apiserver oom_adj: -16
	I0722 04:25:04.721305    4749 kubeadm.go:1113] duration metric: took 48.653667ms to wait for elevateKubeSystemPrivileges
	I0722 04:25:04.721317    4749 kubeadm.go:394] duration metric: took 4m11.420412583s to StartCluster
	I0722 04:25:04.721327    4749 settings.go:142] acquiring lock: {Name:mk640939e683dda0ffda5b348284f38e73fbc066 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 04:25:04.721417    4749 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19313-1127/kubeconfig
	I0722 04:25:04.721843    4749 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19313-1127/kubeconfig: {Name:mkb5cae8b3f3a2ff5a3e393f1e4daf97762f1a5b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 04:25:04.722060    4749 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0722 04:25:04.722071    4749 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0722 04:25:04.722104    4749 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-239000"
	I0722 04:25:04.722119    4749 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-239000"
	W0722 04:25:04.722122    4749 addons.go:243] addon storage-provisioner should already be in state true
	I0722 04:25:04.722137    4749 host.go:66] Checking if "stopped-upgrade-239000" exists ...
	I0722 04:25:04.722140    4749 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-239000"
	I0722 04:25:04.722155    4749 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-239000"
	I0722 04:25:04.722156    4749 config.go:182] Loaded profile config "stopped-upgrade-239000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0722 04:25:04.723243    4749 kapi.go:59] client config for stopped-upgrade-239000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/stopped-upgrade-239000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/stopped-upgrade-239000/client.key", CAFile:"/Users/jenkins/minikube-integration/19313-1127/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x101fef790), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0722 04:25:04.723375    4749 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-239000"
	W0722 04:25:04.723380    4749 addons.go:243] addon default-storageclass should already be in state true
	I0722 04:25:04.723390    4749 host.go:66] Checking if "stopped-upgrade-239000" exists ...
	I0722 04:25:04.726507    4749 out.go:177] * Verifying Kubernetes components...
	I0722 04:25:04.726955    4749 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0722 04:25:04.729725    4749 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0722 04:25:04.729733    4749 sshutil.go:53] new ssh client: &{IP:localhost Port:50430 SSHKeyPath:/Users/jenkins/minikube-integration/19313-1127/.minikube/machines/stopped-upgrade-239000/id_rsa Username:docker}
	I0722 04:25:04.733494    4749 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0722 04:25:04.737518    4749 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 04:25:04.741536    4749 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0722 04:25:04.741542    4749 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0722 04:25:04.741548    4749 sshutil.go:53] new ssh client: &{IP:localhost Port:50430 SSHKeyPath:/Users/jenkins/minikube-integration/19313-1127/.minikube/machines/stopped-upgrade-239000/id_rsa Username:docker}
	I0722 04:25:04.824635    4749 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 04:25:04.830321    4749 api_server.go:52] waiting for apiserver process to appear ...
	I0722 04:25:04.830365    4749 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 04:25:04.834007    4749 api_server.go:72] duration metric: took 111.937ms to wait for apiserver process to appear ...
	I0722 04:25:04.834014    4749 api_server.go:88] waiting for apiserver healthz status ...
	I0722 04:25:04.834021    4749 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:25:04.842367    4749 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0722 04:25:04.875483    4749 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0722 04:25:09.836078    4749 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:25:09.836114    4749 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:25:14.836438    4749 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:25:14.836478    4749 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:25:19.836749    4749 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:25:19.836780    4749 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:25:24.837237    4749 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:25:24.837277    4749 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:25:29.837837    4749 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:25:29.837872    4749 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:25:34.838642    4749 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:25:34.838679    4749 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0722 04:25:35.196096    4749 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0722 04:25:35.200393    4749 out.go:177] * Enabled addons: storage-provisioner
	I0722 04:25:35.208336    4749 addons.go:510] duration metric: took 30.486757708s for enable addons: enabled=[storage-provisioner]
	I0722 04:25:39.839717    4749 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:25:39.839770    4749 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:25:44.841200    4749 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:25:44.841244    4749 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:25:49.842075    4749 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:25:49.842136    4749 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:25:54.844237    4749 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:25:54.844291    4749 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:25:59.846423    4749 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:25:59.846472    4749 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:26:04.848627    4749 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:26:04.848785    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0722 04:26:04.859371    4749 logs.go:276] 1 containers: [719c046675d2]
	I0722 04:26:04.859445    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0722 04:26:04.869679    4749 logs.go:276] 1 containers: [f6f88a4e9479]
	I0722 04:26:04.869746    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0722 04:26:04.880080    4749 logs.go:276] 2 containers: [e1fc095d9e3c e5643cdb93d0]
	I0722 04:26:04.880146    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0722 04:26:04.890037    4749 logs.go:276] 1 containers: [82090421e099]
	I0722 04:26:04.890109    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0722 04:26:04.907074    4749 logs.go:276] 1 containers: [d876da6f0d58]
	I0722 04:26:04.907139    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0722 04:26:04.917313    4749 logs.go:276] 1 containers: [329551c0b44a]
	I0722 04:26:04.917383    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0722 04:26:04.928615    4749 logs.go:276] 0 containers: []
	W0722 04:26:04.928625    4749 logs.go:278] No container was found matching "kindnet"
	I0722 04:26:04.928686    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0722 04:26:04.939473    4749 logs.go:276] 1 containers: [49094829b3ea]
	I0722 04:26:04.939488    4749 logs.go:123] Gathering logs for coredns [e1fc095d9e3c] ...
	I0722 04:26:04.939495    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1fc095d9e3c"
	I0722 04:26:04.951061    4749 logs.go:123] Gathering logs for kube-proxy [d876da6f0d58] ...
	I0722 04:26:04.951073    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d876da6f0d58"
	I0722 04:26:04.962817    4749 logs.go:123] Gathering logs for kube-controller-manager [329551c0b44a] ...
	I0722 04:26:04.962829    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 329551c0b44a"
	I0722 04:26:04.980435    4749 logs.go:123] Gathering logs for storage-provisioner [49094829b3ea] ...
	I0722 04:26:04.980446    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49094829b3ea"
	I0722 04:26:04.991790    4749 logs.go:123] Gathering logs for container status ...
	I0722 04:26:04.991803    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 04:26:05.003889    4749 logs.go:123] Gathering logs for kubelet ...
	I0722 04:26:05.003899    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 04:26:05.040234    4749 logs.go:123] Gathering logs for describe nodes ...
	I0722 04:26:05.040249    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0722 04:26:05.074503    4749 logs.go:123] Gathering logs for etcd [f6f88a4e9479] ...
	I0722 04:26:05.074515    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6f88a4e9479"
	I0722 04:26:05.091991    4749 logs.go:123] Gathering logs for kube-scheduler [82090421e099] ...
	I0722 04:26:05.092003    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82090421e099"
	I0722 04:26:05.106293    4749 logs.go:123] Gathering logs for Docker ...
	I0722 04:26:05.106307    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0722 04:26:05.131141    4749 logs.go:123] Gathering logs for dmesg ...
	I0722 04:26:05.131153    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 04:26:05.135373    4749 logs.go:123] Gathering logs for kube-apiserver [719c046675d2] ...
	I0722 04:26:05.135379    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 719c046675d2"
	I0722 04:26:05.150485    4749 logs.go:123] Gathering logs for coredns [e5643cdb93d0] ...
	I0722 04:26:05.150498    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5643cdb93d0"
	I0722 04:26:07.663832    4749 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:26:12.666086    4749 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:26:12.666220    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0722 04:26:12.678094    4749 logs.go:276] 1 containers: [719c046675d2]
	I0722 04:26:12.678166    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0722 04:26:12.688598    4749 logs.go:276] 1 containers: [f6f88a4e9479]
	I0722 04:26:12.688673    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0722 04:26:12.701284    4749 logs.go:276] 2 containers: [e1fc095d9e3c e5643cdb93d0]
	I0722 04:26:12.701356    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0722 04:26:12.712551    4749 logs.go:276] 1 containers: [82090421e099]
	I0722 04:26:12.712615    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0722 04:26:12.723210    4749 logs.go:276] 1 containers: [d876da6f0d58]
	I0722 04:26:12.723280    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0722 04:26:12.734297    4749 logs.go:276] 1 containers: [329551c0b44a]
	I0722 04:26:12.734362    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0722 04:26:12.747685    4749 logs.go:276] 0 containers: []
	W0722 04:26:12.747696    4749 logs.go:278] No container was found matching "kindnet"
	I0722 04:26:12.747749    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0722 04:26:12.757683    4749 logs.go:276] 1 containers: [49094829b3ea]
	I0722 04:26:12.757698    4749 logs.go:123] Gathering logs for kube-controller-manager [329551c0b44a] ...
	I0722 04:26:12.757705    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 329551c0b44a"
	I0722 04:26:12.775433    4749 logs.go:123] Gathering logs for container status ...
	I0722 04:26:12.775444    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 04:26:12.786756    4749 logs.go:123] Gathering logs for dmesg ...
	I0722 04:26:12.786767    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 04:26:12.790943    4749 logs.go:123] Gathering logs for describe nodes ...
	I0722 04:26:12.790951    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0722 04:26:12.827013    4749 logs.go:123] Gathering logs for coredns [e5643cdb93d0] ...
	I0722 04:26:12.827027    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5643cdb93d0"
	I0722 04:26:12.840233    4749 logs.go:123] Gathering logs for kube-scheduler [82090421e099] ...
	I0722 04:26:12.840249    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82090421e099"
	I0722 04:26:12.855401    4749 logs.go:123] Gathering logs for kube-proxy [d876da6f0d58] ...
	I0722 04:26:12.855412    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d876da6f0d58"
	I0722 04:26:12.867153    4749 logs.go:123] Gathering logs for storage-provisioner [49094829b3ea] ...
	I0722 04:26:12.867162    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49094829b3ea"
	I0722 04:26:12.879004    4749 logs.go:123] Gathering logs for Docker ...
	I0722 04:26:12.879019    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0722 04:26:12.903416    4749 logs.go:123] Gathering logs for kubelet ...
	I0722 04:26:12.903423    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 04:26:12.940484    4749 logs.go:123] Gathering logs for kube-apiserver [719c046675d2] ...
	I0722 04:26:12.940498    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 719c046675d2"
	I0722 04:26:12.954894    4749 logs.go:123] Gathering logs for etcd [f6f88a4e9479] ...
	I0722 04:26:12.954909    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6f88a4e9479"
	I0722 04:26:12.969148    4749 logs.go:123] Gathering logs for coredns [e1fc095d9e3c] ...
	I0722 04:26:12.969159    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1fc095d9e3c"
	I0722 04:26:15.483041    4749 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:26:20.485295    4749 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:26:20.485743    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0722 04:26:20.534692    4749 logs.go:276] 1 containers: [719c046675d2]
	I0722 04:26:20.534820    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0722 04:26:20.554265    4749 logs.go:276] 1 containers: [f6f88a4e9479]
	I0722 04:26:20.554352    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0722 04:26:20.567857    4749 logs.go:276] 2 containers: [e1fc095d9e3c e5643cdb93d0]
	I0722 04:26:20.567930    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0722 04:26:20.579354    4749 logs.go:276] 1 containers: [82090421e099]
	I0722 04:26:20.579417    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0722 04:26:20.589985    4749 logs.go:276] 1 containers: [d876da6f0d58]
	I0722 04:26:20.590055    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0722 04:26:20.600636    4749 logs.go:276] 1 containers: [329551c0b44a]
	I0722 04:26:20.600704    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0722 04:26:20.610524    4749 logs.go:276] 0 containers: []
	W0722 04:26:20.610534    4749 logs.go:278] No container was found matching "kindnet"
	I0722 04:26:20.610587    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0722 04:26:20.621133    4749 logs.go:276] 1 containers: [49094829b3ea]
	I0722 04:26:20.621150    4749 logs.go:123] Gathering logs for kube-proxy [d876da6f0d58] ...
	I0722 04:26:20.621155    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d876da6f0d58"
	I0722 04:26:20.632618    4749 logs.go:123] Gathering logs for kube-controller-manager [329551c0b44a] ...
	I0722 04:26:20.632631    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 329551c0b44a"
	I0722 04:26:20.649872    4749 logs.go:123] Gathering logs for Docker ...
	I0722 04:26:20.649882    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0722 04:26:20.674695    4749 logs.go:123] Gathering logs for dmesg ...
	I0722 04:26:20.674705    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 04:26:20.678903    4749 logs.go:123] Gathering logs for coredns [e1fc095d9e3c] ...
	I0722 04:26:20.678911    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1fc095d9e3c"
	I0722 04:26:20.690710    4749 logs.go:123] Gathering logs for kube-apiserver [719c046675d2] ...
	I0722 04:26:20.690723    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 719c046675d2"
	I0722 04:26:20.705226    4749 logs.go:123] Gathering logs for etcd [f6f88a4e9479] ...
	I0722 04:26:20.705237    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6f88a4e9479"
	I0722 04:26:20.719799    4749 logs.go:123] Gathering logs for coredns [e5643cdb93d0] ...
	I0722 04:26:20.719812    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5643cdb93d0"
	I0722 04:26:20.731329    4749 logs.go:123] Gathering logs for kube-scheduler [82090421e099] ...
	I0722 04:26:20.731340    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82090421e099"
	I0722 04:26:20.746496    4749 logs.go:123] Gathering logs for storage-provisioner [49094829b3ea] ...
	I0722 04:26:20.746507    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49094829b3ea"
	I0722 04:26:20.758491    4749 logs.go:123] Gathering logs for container status ...
	I0722 04:26:20.758505    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 04:26:20.770243    4749 logs.go:123] Gathering logs for kubelet ...
	I0722 04:26:20.770255    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 04:26:20.807992    4749 logs.go:123] Gathering logs for describe nodes ...
	I0722 04:26:20.808001    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0722 04:26:23.346048    4749 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:26:28.348529    4749 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:26:28.348923    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0722 04:26:28.388954    4749 logs.go:276] 1 containers: [719c046675d2]
	I0722 04:26:28.389085    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0722 04:26:28.409900    4749 logs.go:276] 1 containers: [f6f88a4e9479]
	I0722 04:26:28.410010    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0722 04:26:28.427336    4749 logs.go:276] 2 containers: [e1fc095d9e3c e5643cdb93d0]
	I0722 04:26:28.427415    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0722 04:26:28.443519    4749 logs.go:276] 1 containers: [82090421e099]
	I0722 04:26:28.443581    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0722 04:26:28.453703    4749 logs.go:276] 1 containers: [d876da6f0d58]
	I0722 04:26:28.453770    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0722 04:26:28.464579    4749 logs.go:276] 1 containers: [329551c0b44a]
	I0722 04:26:28.464650    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0722 04:26:28.474674    4749 logs.go:276] 0 containers: []
	W0722 04:26:28.474683    4749 logs.go:278] No container was found matching "kindnet"
	I0722 04:26:28.474734    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0722 04:26:28.485621    4749 logs.go:276] 1 containers: [49094829b3ea]
	I0722 04:26:28.485641    4749 logs.go:123] Gathering logs for dmesg ...
	I0722 04:26:28.485647    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 04:26:28.489851    4749 logs.go:123] Gathering logs for kube-apiserver [719c046675d2] ...
	I0722 04:26:28.489857    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 719c046675d2"
	I0722 04:26:28.504563    4749 logs.go:123] Gathering logs for etcd [f6f88a4e9479] ...
	I0722 04:26:28.504572    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6f88a4e9479"
	I0722 04:26:28.518701    4749 logs.go:123] Gathering logs for coredns [e5643cdb93d0] ...
	I0722 04:26:28.518711    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5643cdb93d0"
	I0722 04:26:28.531654    4749 logs.go:123] Gathering logs for kube-scheduler [82090421e099] ...
	I0722 04:26:28.531668    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82090421e099"
	I0722 04:26:28.547264    4749 logs.go:123] Gathering logs for kube-proxy [d876da6f0d58] ...
	I0722 04:26:28.547275    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d876da6f0d58"
	I0722 04:26:28.558680    4749 logs.go:123] Gathering logs for kube-controller-manager [329551c0b44a] ...
	I0722 04:26:28.558691    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 329551c0b44a"
	I0722 04:26:28.577058    4749 logs.go:123] Gathering logs for kubelet ...
	I0722 04:26:28.577070    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 04:26:28.614358    4749 logs.go:123] Gathering logs for storage-provisioner [49094829b3ea] ...
	I0722 04:26:28.614368    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49094829b3ea"
	I0722 04:26:28.625727    4749 logs.go:123] Gathering logs for coredns [e1fc095d9e3c] ...
	I0722 04:26:28.625739    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1fc095d9e3c"
	I0722 04:26:28.636952    4749 logs.go:123] Gathering logs for Docker ...
	I0722 04:26:28.636965    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0722 04:26:28.660300    4749 logs.go:123] Gathering logs for container status ...
	I0722 04:26:28.660309    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 04:26:28.671379    4749 logs.go:123] Gathering logs for describe nodes ...
	I0722 04:26:28.671393    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0722 04:26:31.213872    4749 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:26:36.215636    4749 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:26:36.216102    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0722 04:26:36.255764    4749 logs.go:276] 1 containers: [719c046675d2]
	I0722 04:26:36.255901    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0722 04:26:36.281011    4749 logs.go:276] 1 containers: [f6f88a4e9479]
	I0722 04:26:36.281118    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0722 04:26:36.295535    4749 logs.go:276] 2 containers: [e1fc095d9e3c e5643cdb93d0]
	I0722 04:26:36.295608    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0722 04:26:36.307896    4749 logs.go:276] 1 containers: [82090421e099]
	I0722 04:26:36.307973    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0722 04:26:36.318523    4749 logs.go:276] 1 containers: [d876da6f0d58]
	I0722 04:26:36.318591    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0722 04:26:36.328897    4749 logs.go:276] 1 containers: [329551c0b44a]
	I0722 04:26:36.328962    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0722 04:26:36.339313    4749 logs.go:276] 0 containers: []
	W0722 04:26:36.339324    4749 logs.go:278] No container was found matching "kindnet"
	I0722 04:26:36.339376    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0722 04:26:36.349987    4749 logs.go:276] 1 containers: [49094829b3ea]
	I0722 04:26:36.350002    4749 logs.go:123] Gathering logs for Docker ...
	I0722 04:26:36.350006    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0722 04:26:36.373337    4749 logs.go:123] Gathering logs for kube-apiserver [719c046675d2] ...
	I0722 04:26:36.373345    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 719c046675d2"
	I0722 04:26:36.387523    4749 logs.go:123] Gathering logs for etcd [f6f88a4e9479] ...
	I0722 04:26:36.387538    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6f88a4e9479"
	I0722 04:26:36.401649    4749 logs.go:123] Gathering logs for coredns [e5643cdb93d0] ...
	I0722 04:26:36.401661    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5643cdb93d0"
	I0722 04:26:36.413565    4749 logs.go:123] Gathering logs for kube-controller-manager [329551c0b44a] ...
	I0722 04:26:36.413575    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 329551c0b44a"
	I0722 04:26:36.431589    4749 logs.go:123] Gathering logs for storage-provisioner [49094829b3ea] ...
	I0722 04:26:36.431602    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49094829b3ea"
	I0722 04:26:36.443353    4749 logs.go:123] Gathering logs for kube-proxy [d876da6f0d58] ...
	I0722 04:26:36.443366    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d876da6f0d58"
	I0722 04:26:36.456628    4749 logs.go:123] Gathering logs for container status ...
	I0722 04:26:36.456639    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 04:26:36.468329    4749 logs.go:123] Gathering logs for kubelet ...
	I0722 04:26:36.468342    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 04:26:36.503961    4749 logs.go:123] Gathering logs for dmesg ...
	I0722 04:26:36.503970    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 04:26:36.507683    4749 logs.go:123] Gathering logs for describe nodes ...
	I0722 04:26:36.507693    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0722 04:26:36.545452    4749 logs.go:123] Gathering logs for coredns [e1fc095d9e3c] ...
	I0722 04:26:36.545465    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1fc095d9e3c"
	I0722 04:26:36.557339    4749 logs.go:123] Gathering logs for kube-scheduler [82090421e099] ...
	I0722 04:26:36.557372    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82090421e099"
	I0722 04:26:39.075977    4749 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:26:44.078141    4749 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:26:44.078363    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0722 04:26:44.101476    4749 logs.go:276] 1 containers: [719c046675d2]
	I0722 04:26:44.101596    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0722 04:26:44.116590    4749 logs.go:276] 1 containers: [f6f88a4e9479]
	I0722 04:26:44.116668    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0722 04:26:44.129254    4749 logs.go:276] 2 containers: [e1fc095d9e3c e5643cdb93d0]
	I0722 04:26:44.129320    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0722 04:26:44.141559    4749 logs.go:276] 1 containers: [82090421e099]
	I0722 04:26:44.141618    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0722 04:26:44.151592    4749 logs.go:276] 1 containers: [d876da6f0d58]
	I0722 04:26:44.151662    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0722 04:26:44.162101    4749 logs.go:276] 1 containers: [329551c0b44a]
	I0722 04:26:44.162165    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0722 04:26:44.172004    4749 logs.go:276] 0 containers: []
	W0722 04:26:44.172013    4749 logs.go:278] No container was found matching "kindnet"
	I0722 04:26:44.172068    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0722 04:26:44.183039    4749 logs.go:276] 1 containers: [49094829b3ea]
	I0722 04:26:44.183053    4749 logs.go:123] Gathering logs for kube-controller-manager [329551c0b44a] ...
	I0722 04:26:44.183058    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 329551c0b44a"
	I0722 04:26:44.201031    4749 logs.go:123] Gathering logs for storage-provisioner [49094829b3ea] ...
	I0722 04:26:44.201042    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49094829b3ea"
	I0722 04:26:44.212626    4749 logs.go:123] Gathering logs for dmesg ...
	I0722 04:26:44.212636    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 04:26:44.217224    4749 logs.go:123] Gathering logs for describe nodes ...
	I0722 04:26:44.217233    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0722 04:26:44.255953    4749 logs.go:123] Gathering logs for kube-apiserver [719c046675d2] ...
	I0722 04:26:44.255962    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 719c046675d2"
	I0722 04:26:44.270054    4749 logs.go:123] Gathering logs for coredns [e1fc095d9e3c] ...
	I0722 04:26:44.270069    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1fc095d9e3c"
	I0722 04:26:44.284017    4749 logs.go:123] Gathering logs for coredns [e5643cdb93d0] ...
	I0722 04:26:44.284036    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5643cdb93d0"
	I0722 04:26:44.296985    4749 logs.go:123] Gathering logs for kube-scheduler [82090421e099] ...
	I0722 04:26:44.296996    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82090421e099"
	I0722 04:26:44.316854    4749 logs.go:123] Gathering logs for Docker ...
	I0722 04:26:44.316874    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0722 04:26:44.342637    4749 logs.go:123] Gathering logs for container status ...
	I0722 04:26:44.342651    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 04:26:44.355190    4749 logs.go:123] Gathering logs for kubelet ...
	I0722 04:26:44.355199    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 04:26:44.391539    4749 logs.go:123] Gathering logs for etcd [f6f88a4e9479] ...
	I0722 04:26:44.391549    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6f88a4e9479"
	I0722 04:26:44.405370    4749 logs.go:123] Gathering logs for kube-proxy [d876da6f0d58] ...
	I0722 04:26:44.405381    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d876da6f0d58"
	I0722 04:26:46.919321    4749 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:26:51.922009    4749 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:26:51.922469    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0722 04:26:51.962119    4749 logs.go:276] 1 containers: [719c046675d2]
	I0722 04:26:51.962247    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0722 04:26:51.983070    4749 logs.go:276] 1 containers: [f6f88a4e9479]
	I0722 04:26:51.983169    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0722 04:26:51.998418    4749 logs.go:276] 2 containers: [e1fc095d9e3c e5643cdb93d0]
	I0722 04:26:51.998498    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0722 04:26:52.012200    4749 logs.go:276] 1 containers: [82090421e099]
	I0722 04:26:52.012265    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0722 04:26:52.023047    4749 logs.go:276] 1 containers: [d876da6f0d58]
	I0722 04:26:52.023107    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0722 04:26:52.033778    4749 logs.go:276] 1 containers: [329551c0b44a]
	I0722 04:26:52.033848    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0722 04:26:52.051246    4749 logs.go:276] 0 containers: []
	W0722 04:26:52.051257    4749 logs.go:278] No container was found matching "kindnet"
	I0722 04:26:52.051311    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0722 04:26:52.061540    4749 logs.go:276] 1 containers: [49094829b3ea]
	I0722 04:26:52.061553    4749 logs.go:123] Gathering logs for kube-apiserver [719c046675d2] ...
	I0722 04:26:52.061558    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 719c046675d2"
	I0722 04:26:52.079622    4749 logs.go:123] Gathering logs for etcd [f6f88a4e9479] ...
	I0722 04:26:52.079633    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6f88a4e9479"
	I0722 04:26:52.097600    4749 logs.go:123] Gathering logs for kube-scheduler [82090421e099] ...
	I0722 04:26:52.097608    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82090421e099"
	I0722 04:26:52.112420    4749 logs.go:123] Gathering logs for kube-proxy [d876da6f0d58] ...
	I0722 04:26:52.112431    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d876da6f0d58"
	I0722 04:26:52.124474    4749 logs.go:123] Gathering logs for Docker ...
	I0722 04:26:52.124487    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0722 04:26:52.147757    4749 logs.go:123] Gathering logs for container status ...
	I0722 04:26:52.147766    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 04:26:52.159173    4749 logs.go:123] Gathering logs for storage-provisioner [49094829b3ea] ...
	I0722 04:26:52.159184    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49094829b3ea"
	I0722 04:26:52.170984    4749 logs.go:123] Gathering logs for kubelet ...
	I0722 04:26:52.170994    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 04:26:52.207609    4749 logs.go:123] Gathering logs for dmesg ...
	I0722 04:26:52.207620    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 04:26:52.211574    4749 logs.go:123] Gathering logs for describe nodes ...
	I0722 04:26:52.211580    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0722 04:26:52.246934    4749 logs.go:123] Gathering logs for coredns [e1fc095d9e3c] ...
	I0722 04:26:52.246948    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1fc095d9e3c"
	I0722 04:26:52.258328    4749 logs.go:123] Gathering logs for coredns [e5643cdb93d0] ...
	I0722 04:26:52.258342    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5643cdb93d0"
	I0722 04:26:52.269727    4749 logs.go:123] Gathering logs for kube-controller-manager [329551c0b44a] ...
	I0722 04:26:52.269739    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 329551c0b44a"
	I0722 04:26:54.788924    4749 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:26:59.790720    4749 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:26:59.791143    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0722 04:26:59.827229    4749 logs.go:276] 1 containers: [719c046675d2]
	I0722 04:26:59.827366    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0722 04:26:59.847978    4749 logs.go:276] 1 containers: [f6f88a4e9479]
	I0722 04:26:59.848073    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0722 04:26:59.862008    4749 logs.go:276] 2 containers: [e1fc095d9e3c e5643cdb93d0]
	I0722 04:26:59.862085    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0722 04:26:59.874147    4749 logs.go:276] 1 containers: [82090421e099]
	I0722 04:26:59.874218    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0722 04:26:59.885179    4749 logs.go:276] 1 containers: [d876da6f0d58]
	I0722 04:26:59.885254    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0722 04:26:59.895799    4749 logs.go:276] 1 containers: [329551c0b44a]
	I0722 04:26:59.895866    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0722 04:26:59.906106    4749 logs.go:276] 0 containers: []
	W0722 04:26:59.906119    4749 logs.go:278] No container was found matching "kindnet"
	I0722 04:26:59.906177    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0722 04:26:59.916600    4749 logs.go:276] 1 containers: [49094829b3ea]
	I0722 04:26:59.916615    4749 logs.go:123] Gathering logs for kubelet ...
	I0722 04:26:59.916620    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 04:26:59.952872    4749 logs.go:123] Gathering logs for kube-scheduler [82090421e099] ...
	I0722 04:26:59.952882    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82090421e099"
	I0722 04:26:59.967562    4749 logs.go:123] Gathering logs for kube-proxy [d876da6f0d58] ...
	I0722 04:26:59.967571    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d876da6f0d58"
	I0722 04:26:59.979409    4749 logs.go:123] Gathering logs for storage-provisioner [49094829b3ea] ...
	I0722 04:26:59.979423    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49094829b3ea"
	I0722 04:26:59.999168    4749 logs.go:123] Gathering logs for Docker ...
	I0722 04:26:59.999181    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0722 04:27:00.023672    4749 logs.go:123] Gathering logs for coredns [e5643cdb93d0] ...
	I0722 04:27:00.023686    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5643cdb93d0"
	I0722 04:27:00.035712    4749 logs.go:123] Gathering logs for kube-controller-manager [329551c0b44a] ...
	I0722 04:27:00.035725    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 329551c0b44a"
	I0722 04:27:00.053306    4749 logs.go:123] Gathering logs for container status ...
	I0722 04:27:00.053320    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 04:27:00.065204    4749 logs.go:123] Gathering logs for dmesg ...
	I0722 04:27:00.065218    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 04:27:00.070030    4749 logs.go:123] Gathering logs for describe nodes ...
	I0722 04:27:00.070036    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0722 04:27:00.105494    4749 logs.go:123] Gathering logs for kube-apiserver [719c046675d2] ...
	I0722 04:27:00.105506    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 719c046675d2"
	I0722 04:27:00.120924    4749 logs.go:123] Gathering logs for etcd [f6f88a4e9479] ...
	I0722 04:27:00.120937    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6f88a4e9479"
	I0722 04:27:00.134825    4749 logs.go:123] Gathering logs for coredns [e1fc095d9e3c] ...
	I0722 04:27:00.134838    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1fc095d9e3c"
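	Each gathering pass above is triggered by the apiserver health probe immediately before it timing out. A minimal sketch of reproducing that probe by hand, assuming shell access to the guest: the URL and the roughly 5-second client timeout are taken from the log entries, while the curl invocation itself is only illustrative.
	# Probe the endpoint the log shows timing out (context deadline exceeded after ~5s).
	# Whether 10.0.2.15:8443 is reachable from a given shell depends on the QEMU user-mode network setup.
	curl -k --max-time 5 https://10.0.2.15:8443/healthz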
	I0722 04:27:02.649610    4749 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:27:07.651076    4749 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:27:07.651453    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0722 04:27:07.687117    4749 logs.go:276] 1 containers: [719c046675d2]
	I0722 04:27:07.687246    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0722 04:27:07.717049    4749 logs.go:276] 1 containers: [f6f88a4e9479]
	I0722 04:27:07.717138    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0722 04:27:07.730650    4749 logs.go:276] 3 containers: [364b5674e735 e1fc095d9e3c e5643cdb93d0]
	I0722 04:27:07.730723    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0722 04:27:07.741655    4749 logs.go:276] 1 containers: [82090421e099]
	I0722 04:27:07.741723    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0722 04:27:07.752360    4749 logs.go:276] 1 containers: [d876da6f0d58]
	I0722 04:27:07.752425    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0722 04:27:07.762668    4749 logs.go:276] 1 containers: [329551c0b44a]
	I0722 04:27:07.762756    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0722 04:27:07.772740    4749 logs.go:276] 0 containers: []
	W0722 04:27:07.772753    4749 logs.go:278] No container was found matching "kindnet"
	I0722 04:27:07.772806    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0722 04:27:07.786571    4749 logs.go:276] 1 containers: [49094829b3ea]
	I0722 04:27:07.786589    4749 logs.go:123] Gathering logs for describe nodes ...
	I0722 04:27:07.786596    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0722 04:27:07.821405    4749 logs.go:123] Gathering logs for kube-apiserver [719c046675d2] ...
	I0722 04:27:07.821418    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 719c046675d2"
	I0722 04:27:07.835841    4749 logs.go:123] Gathering logs for coredns [364b5674e735] ...
	I0722 04:27:07.835854    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 364b5674e735"
	I0722 04:27:07.847410    4749 logs.go:123] Gathering logs for coredns [e5643cdb93d0] ...
	I0722 04:27:07.847423    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5643cdb93d0"
	I0722 04:27:07.858768    4749 logs.go:123] Gathering logs for coredns [e1fc095d9e3c] ...
	I0722 04:27:07.858780    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1fc095d9e3c"
	I0722 04:27:07.871368    4749 logs.go:123] Gathering logs for kube-proxy [d876da6f0d58] ...
	I0722 04:27:07.871380    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d876da6f0d58"
	I0722 04:27:07.884130    4749 logs.go:123] Gathering logs for container status ...
	I0722 04:27:07.884143    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 04:27:07.895936    4749 logs.go:123] Gathering logs for kubelet ...
	I0722 04:27:07.895949    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 04:27:07.931651    4749 logs.go:123] Gathering logs for dmesg ...
	I0722 04:27:07.931659    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 04:27:07.935757    4749 logs.go:123] Gathering logs for kube-controller-manager [329551c0b44a] ...
	I0722 04:27:07.935764    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 329551c0b44a"
	I0722 04:27:07.953527    4749 logs.go:123] Gathering logs for storage-provisioner [49094829b3ea] ...
	I0722 04:27:07.953539    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49094829b3ea"
	I0722 04:27:07.965149    4749 logs.go:123] Gathering logs for Docker ...
	I0722 04:27:07.965161    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0722 04:27:07.990421    4749 logs.go:123] Gathering logs for etcd [f6f88a4e9479] ...
	I0722 04:27:07.990430    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6f88a4e9479"
	I0722 04:27:08.003986    4749 logs.go:123] Gathering logs for kube-scheduler [82090421e099] ...
	I0722 04:27:08.003995    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82090421e099"
	I0722 04:27:10.521264    4749 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:27:15.523895    4749 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:27:15.524082    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0722 04:27:15.535747    4749 logs.go:276] 1 containers: [719c046675d2]
	I0722 04:27:15.535816    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0722 04:27:15.546418    4749 logs.go:276] 1 containers: [f6f88a4e9479]
	I0722 04:27:15.546483    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0722 04:27:15.557195    4749 logs.go:276] 3 containers: [364b5674e735 e1fc095d9e3c e5643cdb93d0]
	I0722 04:27:15.557257    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0722 04:27:15.568027    4749 logs.go:276] 1 containers: [82090421e099]
	I0722 04:27:15.568098    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0722 04:27:15.578292    4749 logs.go:276] 1 containers: [d876da6f0d58]
	I0722 04:27:15.578361    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0722 04:27:15.591942    4749 logs.go:276] 1 containers: [329551c0b44a]
	I0722 04:27:15.592000    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0722 04:27:15.602379    4749 logs.go:276] 0 containers: []
	W0722 04:27:15.602392    4749 logs.go:278] No container was found matching "kindnet"
	I0722 04:27:15.602441    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0722 04:27:15.612901    4749 logs.go:276] 1 containers: [49094829b3ea]
	I0722 04:27:15.612916    4749 logs.go:123] Gathering logs for describe nodes ...
	I0722 04:27:15.612921    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0722 04:27:15.647671    4749 logs.go:123] Gathering logs for coredns [e5643cdb93d0] ...
	I0722 04:27:15.647684    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5643cdb93d0"
	I0722 04:27:15.659928    4749 logs.go:123] Gathering logs for kube-proxy [d876da6f0d58] ...
	I0722 04:27:15.659942    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d876da6f0d58"
	I0722 04:27:15.676809    4749 logs.go:123] Gathering logs for kube-controller-manager [329551c0b44a] ...
	I0722 04:27:15.676822    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 329551c0b44a"
	I0722 04:27:15.694878    4749 logs.go:123] Gathering logs for kubelet ...
	I0722 04:27:15.694890    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 04:27:15.731241    4749 logs.go:123] Gathering logs for dmesg ...
	I0722 04:27:15.731248    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 04:27:15.735083    4749 logs.go:123] Gathering logs for etcd [f6f88a4e9479] ...
	I0722 04:27:15.735092    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6f88a4e9479"
	I0722 04:27:15.748869    4749 logs.go:123] Gathering logs for coredns [364b5674e735] ...
	I0722 04:27:15.748881    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 364b5674e735"
	I0722 04:27:15.759928    4749 logs.go:123] Gathering logs for container status ...
	I0722 04:27:15.759938    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 04:27:15.771350    4749 logs.go:123] Gathering logs for Docker ...
	I0722 04:27:15.771359    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0722 04:27:15.794724    4749 logs.go:123] Gathering logs for kube-apiserver [719c046675d2] ...
	I0722 04:27:15.794732    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 719c046675d2"
	I0722 04:27:15.810944    4749 logs.go:123] Gathering logs for coredns [e1fc095d9e3c] ...
	I0722 04:27:15.810956    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1fc095d9e3c"
	I0722 04:27:15.822324    4749 logs.go:123] Gathering logs for kube-scheduler [82090421e099] ...
	I0722 04:27:15.822335    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82090421e099"
	I0722 04:27:15.836905    4749 logs.go:123] Gathering logs for storage-provisioner [49094829b3ea] ...
	I0722 04:27:15.836918    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49094829b3ea"
	I0722 04:27:18.353987    4749 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:27:23.355017    4749 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:27:23.355368    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0722 04:27:23.395895    4749 logs.go:276] 1 containers: [719c046675d2]
	I0722 04:27:23.396016    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0722 04:27:23.418218    4749 logs.go:276] 1 containers: [f6f88a4e9479]
	I0722 04:27:23.418320    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0722 04:27:23.433849    4749 logs.go:276] 4 containers: [b09ba3d8b400 364b5674e735 e1fc095d9e3c e5643cdb93d0]
	I0722 04:27:23.433925    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0722 04:27:23.446108    4749 logs.go:276] 1 containers: [82090421e099]
	I0722 04:27:23.446173    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0722 04:27:23.457009    4749 logs.go:276] 1 containers: [d876da6f0d58]
	I0722 04:27:23.457071    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0722 04:27:23.467862    4749 logs.go:276] 1 containers: [329551c0b44a]
	I0722 04:27:23.467930    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0722 04:27:23.478442    4749 logs.go:276] 0 containers: []
	W0722 04:27:23.478452    4749 logs.go:278] No container was found matching "kindnet"
	I0722 04:27:23.478505    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0722 04:27:23.489135    4749 logs.go:276] 1 containers: [49094829b3ea]
	I0722 04:27:23.489158    4749 logs.go:123] Gathering logs for coredns [364b5674e735] ...
	I0722 04:27:23.489163    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 364b5674e735"
	I0722 04:27:23.501159    4749 logs.go:123] Gathering logs for dmesg ...
	I0722 04:27:23.501171    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 04:27:23.505713    4749 logs.go:123] Gathering logs for coredns [e1fc095d9e3c] ...
	I0722 04:27:23.505721    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1fc095d9e3c"
	I0722 04:27:23.517749    4749 logs.go:123] Gathering logs for coredns [e5643cdb93d0] ...
	I0722 04:27:23.517761    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5643cdb93d0"
	I0722 04:27:23.529931    4749 logs.go:123] Gathering logs for kube-proxy [d876da6f0d58] ...
	I0722 04:27:23.529940    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d876da6f0d58"
	I0722 04:27:23.542923    4749 logs.go:123] Gathering logs for storage-provisioner [49094829b3ea] ...
	I0722 04:27:23.542934    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49094829b3ea"
	I0722 04:27:23.557057    4749 logs.go:123] Gathering logs for container status ...
	I0722 04:27:23.557069    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 04:27:23.569137    4749 logs.go:123] Gathering logs for etcd [f6f88a4e9479] ...
	I0722 04:27:23.569149    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6f88a4e9479"
	I0722 04:27:23.584101    4749 logs.go:123] Gathering logs for kube-controller-manager [329551c0b44a] ...
	I0722 04:27:23.584115    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 329551c0b44a"
	I0722 04:27:23.601832    4749 logs.go:123] Gathering logs for Docker ...
	I0722 04:27:23.601843    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0722 04:27:23.626769    4749 logs.go:123] Gathering logs for kubelet ...
	I0722 04:27:23.626777    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 04:27:23.663774    4749 logs.go:123] Gathering logs for describe nodes ...
	I0722 04:27:23.663783    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0722 04:27:23.699114    4749 logs.go:123] Gathering logs for kube-apiserver [719c046675d2] ...
	I0722 04:27:23.699125    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 719c046675d2"
	I0722 04:27:23.713858    4749 logs.go:123] Gathering logs for coredns [b09ba3d8b400] ...
	I0722 04:27:23.713868    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b09ba3d8b400"
	I0722 04:27:23.727703    4749 logs.go:123] Gathering logs for kube-scheduler [82090421e099] ...
	I0722 04:27:23.727716    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82090421e099"
	I0722 04:27:26.244804    4749 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:27:31.247126    4749 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:27:31.247449    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0722 04:27:31.283318    4749 logs.go:276] 1 containers: [719c046675d2]
	I0722 04:27:31.283440    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0722 04:27:31.302382    4749 logs.go:276] 1 containers: [f6f88a4e9479]
	I0722 04:27:31.302469    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0722 04:27:31.316708    4749 logs.go:276] 4 containers: [b09ba3d8b400 364b5674e735 e1fc095d9e3c e5643cdb93d0]
	I0722 04:27:31.316780    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0722 04:27:31.334123    4749 logs.go:276] 1 containers: [82090421e099]
	I0722 04:27:31.334191    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0722 04:27:31.344905    4749 logs.go:276] 1 containers: [d876da6f0d58]
	I0722 04:27:31.344975    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0722 04:27:31.355119    4749 logs.go:276] 1 containers: [329551c0b44a]
	I0722 04:27:31.355176    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0722 04:27:31.365029    4749 logs.go:276] 0 containers: []
	W0722 04:27:31.365039    4749 logs.go:278] No container was found matching "kindnet"
	I0722 04:27:31.365088    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0722 04:27:31.375113    4749 logs.go:276] 1 containers: [49094829b3ea]
	I0722 04:27:31.375130    4749 logs.go:123] Gathering logs for coredns [b09ba3d8b400] ...
	I0722 04:27:31.375135    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b09ba3d8b400"
	I0722 04:27:31.386928    4749 logs.go:123] Gathering logs for coredns [e1fc095d9e3c] ...
	I0722 04:27:31.386937    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1fc095d9e3c"
	I0722 04:27:31.400771    4749 logs.go:123] Gathering logs for coredns [e5643cdb93d0] ...
	I0722 04:27:31.400782    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5643cdb93d0"
	I0722 04:27:31.413595    4749 logs.go:123] Gathering logs for kube-scheduler [82090421e099] ...
	I0722 04:27:31.413608    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82090421e099"
	I0722 04:27:31.428630    4749 logs.go:123] Gathering logs for storage-provisioner [49094829b3ea] ...
	I0722 04:27:31.428638    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49094829b3ea"
	I0722 04:27:31.440140    4749 logs.go:123] Gathering logs for container status ...
	I0722 04:27:31.440151    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 04:27:31.452597    4749 logs.go:123] Gathering logs for kubelet ...
	I0722 04:27:31.452611    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 04:27:31.490523    4749 logs.go:123] Gathering logs for kube-controller-manager [329551c0b44a] ...
	I0722 04:27:31.490531    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 329551c0b44a"
	I0722 04:27:31.511943    4749 logs.go:123] Gathering logs for kube-apiserver [719c046675d2] ...
	I0722 04:27:31.511953    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 719c046675d2"
	I0722 04:27:31.526393    4749 logs.go:123] Gathering logs for etcd [f6f88a4e9479] ...
	I0722 04:27:31.526406    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6f88a4e9479"
	I0722 04:27:31.540046    4749 logs.go:123] Gathering logs for kube-proxy [d876da6f0d58] ...
	I0722 04:27:31.540059    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d876da6f0d58"
	I0722 04:27:31.551851    4749 logs.go:123] Gathering logs for dmesg ...
	I0722 04:27:31.551863    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 04:27:31.556749    4749 logs.go:123] Gathering logs for describe nodes ...
	I0722 04:27:31.556757    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0722 04:27:31.593325    4749 logs.go:123] Gathering logs for coredns [364b5674e735] ...
	I0722 04:27:31.593337    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 364b5674e735"
	I0722 04:27:31.605902    4749 logs.go:123] Gathering logs for Docker ...
	I0722 04:27:31.605914    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
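	For reference, every gathering pass runs the same command set inside the guest; the sketch below is assembled from the Run: lines recorded above (the container ID shown is specific to this run), not an independent procedure.
	# Discover a component's container, then pull its recent logs.
	docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	docker logs --tail 400 719c046675d2
	# System-level logs gathered on every pass.
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u docker -u cri-docker -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
	sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig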
	I0722 04:27:34.132524    4749 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:27:39.134985    4749 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:27:39.135438    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0722 04:27:39.183558    4749 logs.go:276] 1 containers: [719c046675d2]
	I0722 04:27:39.183669    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0722 04:27:39.203462    4749 logs.go:276] 1 containers: [f6f88a4e9479]
	I0722 04:27:39.203542    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0722 04:27:39.220175    4749 logs.go:276] 4 containers: [b09ba3d8b400 364b5674e735 e1fc095d9e3c e5643cdb93d0]
	I0722 04:27:39.220266    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0722 04:27:39.232728    4749 logs.go:276] 1 containers: [82090421e099]
	I0722 04:27:39.232791    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0722 04:27:39.246873    4749 logs.go:276] 1 containers: [d876da6f0d58]
	I0722 04:27:39.246940    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0722 04:27:39.257844    4749 logs.go:276] 1 containers: [329551c0b44a]
	I0722 04:27:39.257917    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0722 04:27:39.268488    4749 logs.go:276] 0 containers: []
	W0722 04:27:39.268498    4749 logs.go:278] No container was found matching "kindnet"
	I0722 04:27:39.268541    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0722 04:27:39.280944    4749 logs.go:276] 1 containers: [49094829b3ea]
	I0722 04:27:39.280960    4749 logs.go:123] Gathering logs for coredns [e5643cdb93d0] ...
	I0722 04:27:39.280965    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5643cdb93d0"
	I0722 04:27:39.293772    4749 logs.go:123] Gathering logs for kube-scheduler [82090421e099] ...
	I0722 04:27:39.293785    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82090421e099"
	I0722 04:27:39.312377    4749 logs.go:123] Gathering logs for kube-controller-manager [329551c0b44a] ...
	I0722 04:27:39.312388    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 329551c0b44a"
	I0722 04:27:39.330612    4749 logs.go:123] Gathering logs for coredns [b09ba3d8b400] ...
	I0722 04:27:39.330624    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b09ba3d8b400"
	I0722 04:27:39.347231    4749 logs.go:123] Gathering logs for kube-proxy [d876da6f0d58] ...
	I0722 04:27:39.347243    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d876da6f0d58"
	I0722 04:27:39.360047    4749 logs.go:123] Gathering logs for Docker ...
	I0722 04:27:39.360061    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0722 04:27:39.386122    4749 logs.go:123] Gathering logs for coredns [e1fc095d9e3c] ...
	I0722 04:27:39.386142    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1fc095d9e3c"
	I0722 04:27:39.399496    4749 logs.go:123] Gathering logs for etcd [f6f88a4e9479] ...
	I0722 04:27:39.399511    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6f88a4e9479"
	I0722 04:27:39.414542    4749 logs.go:123] Gathering logs for dmesg ...
	I0722 04:27:39.414558    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 04:27:39.418911    4749 logs.go:123] Gathering logs for describe nodes ...
	I0722 04:27:39.418922    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0722 04:27:39.456277    4749 logs.go:123] Gathering logs for kube-apiserver [719c046675d2] ...
	I0722 04:27:39.456292    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 719c046675d2"
	I0722 04:27:39.472105    4749 logs.go:123] Gathering logs for coredns [364b5674e735] ...
	I0722 04:27:39.472117    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 364b5674e735"
	I0722 04:27:39.484818    4749 logs.go:123] Gathering logs for storage-provisioner [49094829b3ea] ...
	I0722 04:27:39.484832    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49094829b3ea"
	I0722 04:27:39.497227    4749 logs.go:123] Gathering logs for container status ...
	I0722 04:27:39.497240    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 04:27:39.510140    4749 logs.go:123] Gathering logs for kubelet ...
	I0722 04:27:39.510155    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 04:27:42.050108    4749 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:27:47.051827    4749 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:27:47.052301    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0722 04:27:47.093530    4749 logs.go:276] 1 containers: [719c046675d2]
	I0722 04:27:47.093651    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0722 04:27:47.115320    4749 logs.go:276] 1 containers: [f6f88a4e9479]
	I0722 04:27:47.115434    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0722 04:27:47.130982    4749 logs.go:276] 4 containers: [b09ba3d8b400 364b5674e735 e1fc095d9e3c e5643cdb93d0]
	I0722 04:27:47.131056    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0722 04:27:47.143475    4749 logs.go:276] 1 containers: [82090421e099]
	I0722 04:27:47.143540    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0722 04:27:47.154850    4749 logs.go:276] 1 containers: [d876da6f0d58]
	I0722 04:27:47.154919    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0722 04:27:47.165259    4749 logs.go:276] 1 containers: [329551c0b44a]
	I0722 04:27:47.165327    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0722 04:27:47.175158    4749 logs.go:276] 0 containers: []
	W0722 04:27:47.175170    4749 logs.go:278] No container was found matching "kindnet"
	I0722 04:27:47.175225    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0722 04:27:47.185430    4749 logs.go:276] 1 containers: [49094829b3ea]
	I0722 04:27:47.185447    4749 logs.go:123] Gathering logs for etcd [f6f88a4e9479] ...
	I0722 04:27:47.185452    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6f88a4e9479"
	I0722 04:27:47.199479    4749 logs.go:123] Gathering logs for coredns [e1fc095d9e3c] ...
	I0722 04:27:47.199491    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1fc095d9e3c"
	I0722 04:27:47.215167    4749 logs.go:123] Gathering logs for kube-scheduler [82090421e099] ...
	I0722 04:27:47.215180    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82090421e099"
	I0722 04:27:47.229753    4749 logs.go:123] Gathering logs for storage-provisioner [49094829b3ea] ...
	I0722 04:27:47.229762    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49094829b3ea"
	I0722 04:27:47.241807    4749 logs.go:123] Gathering logs for dmesg ...
	I0722 04:27:47.241820    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 04:27:47.246106    4749 logs.go:123] Gathering logs for kube-apiserver [719c046675d2] ...
	I0722 04:27:47.246112    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 719c046675d2"
	I0722 04:27:47.261430    4749 logs.go:123] Gathering logs for Docker ...
	I0722 04:27:47.261442    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0722 04:27:47.289434    4749 logs.go:123] Gathering logs for coredns [b09ba3d8b400] ...
	I0722 04:27:47.289444    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b09ba3d8b400"
	I0722 04:27:47.307694    4749 logs.go:123] Gathering logs for coredns [364b5674e735] ...
	I0722 04:27:47.307705    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 364b5674e735"
	I0722 04:27:47.319441    4749 logs.go:123] Gathering logs for container status ...
	I0722 04:27:47.319452    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 04:27:47.331473    4749 logs.go:123] Gathering logs for describe nodes ...
	I0722 04:27:47.331485    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0722 04:27:47.366527    4749 logs.go:123] Gathering logs for coredns [e5643cdb93d0] ...
	I0722 04:27:47.366537    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5643cdb93d0"
	I0722 04:27:47.384593    4749 logs.go:123] Gathering logs for kube-controller-manager [329551c0b44a] ...
	I0722 04:27:47.384608    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 329551c0b44a"
	I0722 04:27:47.402717    4749 logs.go:123] Gathering logs for kubelet ...
	I0722 04:27:47.402730    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 04:27:47.438236    4749 logs.go:123] Gathering logs for kube-proxy [d876da6f0d58] ...
	I0722 04:27:47.438243    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d876da6f0d58"
	I0722 04:27:49.951792    4749 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:27:54.953939    4749 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:27:54.954182    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0722 04:27:54.982742    4749 logs.go:276] 1 containers: [719c046675d2]
	I0722 04:27:54.982846    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0722 04:27:55.000344    4749 logs.go:276] 1 containers: [f6f88a4e9479]
	I0722 04:27:55.000420    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0722 04:27:55.015593    4749 logs.go:276] 4 containers: [b09ba3d8b400 364b5674e735 e1fc095d9e3c e5643cdb93d0]
	I0722 04:27:55.015665    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0722 04:27:55.026689    4749 logs.go:276] 1 containers: [82090421e099]
	I0722 04:27:55.026755    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0722 04:27:55.036899    4749 logs.go:276] 1 containers: [d876da6f0d58]
	I0722 04:27:55.036963    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0722 04:27:55.047841    4749 logs.go:276] 1 containers: [329551c0b44a]
	I0722 04:27:55.047904    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0722 04:27:55.057982    4749 logs.go:276] 0 containers: []
	W0722 04:27:55.057993    4749 logs.go:278] No container was found matching "kindnet"
	I0722 04:27:55.058049    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0722 04:27:55.068314    4749 logs.go:276] 1 containers: [49094829b3ea]
	I0722 04:27:55.068332    4749 logs.go:123] Gathering logs for kube-proxy [d876da6f0d58] ...
	I0722 04:27:55.068338    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d876da6f0d58"
	I0722 04:27:55.079949    4749 logs.go:123] Gathering logs for kubelet ...
	I0722 04:27:55.079962    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 04:27:55.116064    4749 logs.go:123] Gathering logs for dmesg ...
	I0722 04:27:55.116073    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 04:27:55.119999    4749 logs.go:123] Gathering logs for kube-apiserver [719c046675d2] ...
	I0722 04:27:55.120005    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 719c046675d2"
	I0722 04:27:55.134173    4749 logs.go:123] Gathering logs for coredns [e1fc095d9e3c] ...
	I0722 04:27:55.134184    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1fc095d9e3c"
	I0722 04:27:55.146112    4749 logs.go:123] Gathering logs for kube-scheduler [82090421e099] ...
	I0722 04:27:55.146127    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82090421e099"
	I0722 04:27:55.161243    4749 logs.go:123] Gathering logs for describe nodes ...
	I0722 04:27:55.161252    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0722 04:27:55.195193    4749 logs.go:123] Gathering logs for coredns [b09ba3d8b400] ...
	I0722 04:27:55.195204    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b09ba3d8b400"
	I0722 04:27:55.207040    4749 logs.go:123] Gathering logs for etcd [f6f88a4e9479] ...
	I0722 04:27:55.207051    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6f88a4e9479"
	I0722 04:27:55.221195    4749 logs.go:123] Gathering logs for kube-controller-manager [329551c0b44a] ...
	I0722 04:27:55.221207    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 329551c0b44a"
	I0722 04:27:55.239390    4749 logs.go:123] Gathering logs for storage-provisioner [49094829b3ea] ...
	I0722 04:27:55.239400    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49094829b3ea"
	I0722 04:27:55.251290    4749 logs.go:123] Gathering logs for Docker ...
	I0722 04:27:55.251300    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0722 04:27:55.276578    4749 logs.go:123] Gathering logs for coredns [364b5674e735] ...
	I0722 04:27:55.276586    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 364b5674e735"
	I0722 04:27:55.292591    4749 logs.go:123] Gathering logs for coredns [e5643cdb93d0] ...
	I0722 04:27:55.292601    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5643cdb93d0"
	I0722 04:27:55.303731    4749 logs.go:123] Gathering logs for container status ...
	I0722 04:27:55.303741    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 04:27:57.817941    4749 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:28:02.820739    4749 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:28:02.821176    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0722 04:28:02.858050    4749 logs.go:276] 1 containers: [719c046675d2]
	I0722 04:28:02.858182    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0722 04:28:02.878938    4749 logs.go:276] 1 containers: [f6f88a4e9479]
	I0722 04:28:02.879051    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0722 04:28:02.896434    4749 logs.go:276] 4 containers: [b09ba3d8b400 364b5674e735 e1fc095d9e3c e5643cdb93d0]
	I0722 04:28:02.896508    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0722 04:28:02.909345    4749 logs.go:276] 1 containers: [82090421e099]
	I0722 04:28:02.909419    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0722 04:28:02.920285    4749 logs.go:276] 1 containers: [d876da6f0d58]
	I0722 04:28:02.920350    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0722 04:28:02.934358    4749 logs.go:276] 1 containers: [329551c0b44a]
	I0722 04:28:02.934426    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0722 04:28:02.944826    4749 logs.go:276] 0 containers: []
	W0722 04:28:02.944836    4749 logs.go:278] No container was found matching "kindnet"
	I0722 04:28:02.944899    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0722 04:28:02.955277    4749 logs.go:276] 1 containers: [49094829b3ea]
	I0722 04:28:02.955296    4749 logs.go:123] Gathering logs for kube-controller-manager [329551c0b44a] ...
	I0722 04:28:02.955302    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 329551c0b44a"
	I0722 04:28:02.972518    4749 logs.go:123] Gathering logs for kube-proxy [d876da6f0d58] ...
	I0722 04:28:02.972529    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d876da6f0d58"
	I0722 04:28:02.984450    4749 logs.go:123] Gathering logs for storage-provisioner [49094829b3ea] ...
	I0722 04:28:02.984459    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49094829b3ea"
	I0722 04:28:02.996369    4749 logs.go:123] Gathering logs for dmesg ...
	I0722 04:28:02.996380    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 04:28:03.000683    4749 logs.go:123] Gathering logs for describe nodes ...
	I0722 04:28:03.000691    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0722 04:28:03.034255    4749 logs.go:123] Gathering logs for coredns [b09ba3d8b400] ...
	I0722 04:28:03.034264    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b09ba3d8b400"
	I0722 04:28:03.046090    4749 logs.go:123] Gathering logs for coredns [e1fc095d9e3c] ...
	I0722 04:28:03.046101    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1fc095d9e3c"
	I0722 04:28:03.059690    4749 logs.go:123] Gathering logs for coredns [e5643cdb93d0] ...
	I0722 04:28:03.059701    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5643cdb93d0"
	I0722 04:28:03.071550    4749 logs.go:123] Gathering logs for Docker ...
	I0722 04:28:03.071564    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0722 04:28:03.097079    4749 logs.go:123] Gathering logs for container status ...
	I0722 04:28:03.097086    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 04:28:03.121744    4749 logs.go:123] Gathering logs for kubelet ...
	I0722 04:28:03.121753    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 04:28:03.160420    4749 logs.go:123] Gathering logs for kube-apiserver [719c046675d2] ...
	I0722 04:28:03.160441    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 719c046675d2"
	I0722 04:28:03.177759    4749 logs.go:123] Gathering logs for etcd [f6f88a4e9479] ...
	I0722 04:28:03.177775    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6f88a4e9479"
	I0722 04:28:03.193227    4749 logs.go:123] Gathering logs for coredns [364b5674e735] ...
	I0722 04:28:03.193240    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 364b5674e735"
	I0722 04:28:03.207308    4749 logs.go:123] Gathering logs for kube-scheduler [82090421e099] ...
	I0722 04:28:03.207319    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82090421e099"
	I0722 04:28:05.726039    4749 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:28:10.728220    4749 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:28:10.728648    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0722 04:28:10.767378    4749 logs.go:276] 1 containers: [719c046675d2]
	I0722 04:28:10.767495    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0722 04:28:10.787920    4749 logs.go:276] 1 containers: [f6f88a4e9479]
	I0722 04:28:10.788007    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0722 04:28:10.802728    4749 logs.go:276] 4 containers: [b09ba3d8b400 364b5674e735 e1fc095d9e3c e5643cdb93d0]
	I0722 04:28:10.802807    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0722 04:28:10.814950    4749 logs.go:276] 1 containers: [82090421e099]
	I0722 04:28:10.815013    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0722 04:28:10.826292    4749 logs.go:276] 1 containers: [d876da6f0d58]
	I0722 04:28:10.826365    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0722 04:28:10.836802    4749 logs.go:276] 1 containers: [329551c0b44a]
	I0722 04:28:10.836864    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0722 04:28:10.846832    4749 logs.go:276] 0 containers: []
	W0722 04:28:10.846844    4749 logs.go:278] No container was found matching "kindnet"
	I0722 04:28:10.846892    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0722 04:28:10.857587    4749 logs.go:276] 1 containers: [49094829b3ea]
	I0722 04:28:10.857605    4749 logs.go:123] Gathering logs for kubelet ...
	I0722 04:28:10.857610    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 04:28:10.894721    4749 logs.go:123] Gathering logs for describe nodes ...
	I0722 04:28:10.894731    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0722 04:28:10.933576    4749 logs.go:123] Gathering logs for coredns [e1fc095d9e3c] ...
	I0722 04:28:10.933588    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1fc095d9e3c"
	I0722 04:28:10.950233    4749 logs.go:123] Gathering logs for Docker ...
	I0722 04:28:10.950245    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0722 04:28:10.973889    4749 logs.go:123] Gathering logs for coredns [364b5674e735] ...
	I0722 04:28:10.973899    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 364b5674e735"
	I0722 04:28:10.985510    4749 logs.go:123] Gathering logs for coredns [e5643cdb93d0] ...
	I0722 04:28:10.985520    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5643cdb93d0"
	I0722 04:28:10.997484    4749 logs.go:123] Gathering logs for kube-proxy [d876da6f0d58] ...
	I0722 04:28:10.997496    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d876da6f0d58"
	I0722 04:28:11.009461    4749 logs.go:123] Gathering logs for kube-controller-manager [329551c0b44a] ...
	I0722 04:28:11.009470    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 329551c0b44a"
	I0722 04:28:11.026370    4749 logs.go:123] Gathering logs for dmesg ...
	I0722 04:28:11.026381    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 04:28:11.031056    4749 logs.go:123] Gathering logs for kube-apiserver [719c046675d2] ...
	I0722 04:28:11.031064    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 719c046675d2"
	I0722 04:28:11.045444    4749 logs.go:123] Gathering logs for kube-scheduler [82090421e099] ...
	I0722 04:28:11.045456    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82090421e099"
	I0722 04:28:11.066663    4749 logs.go:123] Gathering logs for etcd [f6f88a4e9479] ...
	I0722 04:28:11.066675    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6f88a4e9479"
	I0722 04:28:11.080915    4749 logs.go:123] Gathering logs for coredns [b09ba3d8b400] ...
	I0722 04:28:11.080927    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b09ba3d8b400"
	I0722 04:28:11.092366    4749 logs.go:123] Gathering logs for storage-provisioner [49094829b3ea] ...
	I0722 04:28:11.092380    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49094829b3ea"
	I0722 04:28:11.104026    4749 logs.go:123] Gathering logs for container status ...
	I0722 04:28:11.104040    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 04:28:13.617560    4749 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:28:18.619876    4749 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:28:18.620273    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0722 04:28:18.648939    4749 logs.go:276] 1 containers: [719c046675d2]
	I0722 04:28:18.649055    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0722 04:28:18.673929    4749 logs.go:276] 1 containers: [f6f88a4e9479]
	I0722 04:28:18.674001    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0722 04:28:18.687494    4749 logs.go:276] 4 containers: [b09ba3d8b400 364b5674e735 e1fc095d9e3c e5643cdb93d0]
	I0722 04:28:18.687562    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0722 04:28:18.698865    4749 logs.go:276] 1 containers: [82090421e099]
	I0722 04:28:18.698935    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0722 04:28:18.709959    4749 logs.go:276] 1 containers: [d876da6f0d58]
	I0722 04:28:18.710021    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0722 04:28:18.720295    4749 logs.go:276] 1 containers: [329551c0b44a]
	I0722 04:28:18.720362    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0722 04:28:18.730544    4749 logs.go:276] 0 containers: []
	W0722 04:28:18.730555    4749 logs.go:278] No container was found matching "kindnet"
	I0722 04:28:18.730607    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0722 04:28:18.741350    4749 logs.go:276] 1 containers: [49094829b3ea]
	I0722 04:28:18.741368    4749 logs.go:123] Gathering logs for kubelet ...
	I0722 04:28:18.741373    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 04:28:18.776662    4749 logs.go:123] Gathering logs for kube-apiserver [719c046675d2] ...
	I0722 04:28:18.776672    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 719c046675d2"
	I0722 04:28:18.790874    4749 logs.go:123] Gathering logs for coredns [e5643cdb93d0] ...
	I0722 04:28:18.790886    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5643cdb93d0"
	I0722 04:28:18.803281    4749 logs.go:123] Gathering logs for kube-proxy [d876da6f0d58] ...
	I0722 04:28:18.803295    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d876da6f0d58"
	I0722 04:28:18.814530    4749 logs.go:123] Gathering logs for etcd [f6f88a4e9479] ...
	I0722 04:28:18.814539    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6f88a4e9479"
	I0722 04:28:18.828772    4749 logs.go:123] Gathering logs for coredns [e1fc095d9e3c] ...
	I0722 04:28:18.828782    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1fc095d9e3c"
	I0722 04:28:18.845422    4749 logs.go:123] Gathering logs for Docker ...
	I0722 04:28:18.845434    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0722 04:28:18.868543    4749 logs.go:123] Gathering logs for dmesg ...
	I0722 04:28:18.868550    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 04:28:18.872526    4749 logs.go:123] Gathering logs for kube-controller-manager [329551c0b44a] ...
	I0722 04:28:18.872533    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 329551c0b44a"
	I0722 04:28:18.890368    4749 logs.go:123] Gathering logs for container status ...
	I0722 04:28:18.890381    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 04:28:18.902083    4749 logs.go:123] Gathering logs for storage-provisioner [49094829b3ea] ...
	I0722 04:28:18.902097    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49094829b3ea"
	I0722 04:28:18.913306    4749 logs.go:123] Gathering logs for describe nodes ...
	I0722 04:28:18.913316    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0722 04:28:18.950653    4749 logs.go:123] Gathering logs for coredns [b09ba3d8b400] ...
	I0722 04:28:18.950665    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b09ba3d8b400"
	I0722 04:28:18.963385    4749 logs.go:123] Gathering logs for coredns [364b5674e735] ...
	I0722 04:28:18.963396    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 364b5674e735"
	I0722 04:28:18.975661    4749 logs.go:123] Gathering logs for kube-scheduler [82090421e099] ...
	I0722 04:28:18.975676    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82090421e099"
	I0722 04:28:21.503220    4749 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:28:26.506014    4749 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:28:26.506241    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0722 04:28:26.540421    4749 logs.go:276] 1 containers: [719c046675d2]
	I0722 04:28:26.540532    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0722 04:28:26.557545    4749 logs.go:276] 1 containers: [f6f88a4e9479]
	I0722 04:28:26.557622    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0722 04:28:26.571362    4749 logs.go:276] 4 containers: [b09ba3d8b400 364b5674e735 e1fc095d9e3c e5643cdb93d0]
	I0722 04:28:26.571434    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0722 04:28:26.582636    4749 logs.go:276] 1 containers: [82090421e099]
	I0722 04:28:26.582695    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0722 04:28:26.594230    4749 logs.go:276] 1 containers: [d876da6f0d58]
	I0722 04:28:26.594288    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0722 04:28:26.604968    4749 logs.go:276] 1 containers: [329551c0b44a]
	I0722 04:28:26.605028    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0722 04:28:26.615120    4749 logs.go:276] 0 containers: []
	W0722 04:28:26.615130    4749 logs.go:278] No container was found matching "kindnet"
	I0722 04:28:26.615179    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0722 04:28:26.625552    4749 logs.go:276] 1 containers: [49094829b3ea]
	I0722 04:28:26.625569    4749 logs.go:123] Gathering logs for describe nodes ...
	I0722 04:28:26.625575    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0722 04:28:26.660725    4749 logs.go:123] Gathering logs for coredns [e5643cdb93d0] ...
	I0722 04:28:26.660737    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5643cdb93d0"
	I0722 04:28:26.672634    4749 logs.go:123] Gathering logs for Docker ...
	I0722 04:28:26.672647    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0722 04:28:26.697354    4749 logs.go:123] Gathering logs for kube-controller-manager [329551c0b44a] ...
	I0722 04:28:26.697361    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 329551c0b44a"
	I0722 04:28:26.714865    4749 logs.go:123] Gathering logs for kubelet ...
	I0722 04:28:26.714875    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 04:28:26.750729    4749 logs.go:123] Gathering logs for dmesg ...
	I0722 04:28:26.750741    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 04:28:26.755309    4749 logs.go:123] Gathering logs for coredns [b09ba3d8b400] ...
	I0722 04:28:26.755316    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b09ba3d8b400"
	I0722 04:28:26.766754    4749 logs.go:123] Gathering logs for kube-scheduler [82090421e099] ...
	I0722 04:28:26.766763    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82090421e099"
	I0722 04:28:26.781478    4749 logs.go:123] Gathering logs for storage-provisioner [49094829b3ea] ...
	I0722 04:28:26.781490    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49094829b3ea"
	I0722 04:28:26.792980    4749 logs.go:123] Gathering logs for kube-apiserver [719c046675d2] ...
	I0722 04:28:26.792993    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 719c046675d2"
	I0722 04:28:26.806944    4749 logs.go:123] Gathering logs for etcd [f6f88a4e9479] ...
	I0722 04:28:26.806957    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6f88a4e9479"
	I0722 04:28:26.820782    4749 logs.go:123] Gathering logs for coredns [364b5674e735] ...
	I0722 04:28:26.820793    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 364b5674e735"
	I0722 04:28:26.832618    4749 logs.go:123] Gathering logs for coredns [e1fc095d9e3c] ...
	I0722 04:28:26.832629    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1fc095d9e3c"
	I0722 04:28:26.844012    4749 logs.go:123] Gathering logs for kube-proxy [d876da6f0d58] ...
	I0722 04:28:26.844022    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d876da6f0d58"
	I0722 04:28:26.855620    4749 logs.go:123] Gathering logs for container status ...
	I0722 04:28:26.855634    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 04:28:29.368936    4749 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:28:34.370308    4749 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:28:34.371010    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0722 04:28:34.409812    4749 logs.go:276] 1 containers: [719c046675d2]
	I0722 04:28:34.409936    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0722 04:28:34.432299    4749 logs.go:276] 1 containers: [f6f88a4e9479]
	I0722 04:28:34.432405    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0722 04:28:34.448279    4749 logs.go:276] 4 containers: [b09ba3d8b400 364b5674e735 e1fc095d9e3c e5643cdb93d0]
	I0722 04:28:34.448358    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0722 04:28:34.461006    4749 logs.go:276] 1 containers: [82090421e099]
	I0722 04:28:34.461080    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0722 04:28:34.478097    4749 logs.go:276] 1 containers: [d876da6f0d58]
	I0722 04:28:34.478165    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0722 04:28:34.489007    4749 logs.go:276] 1 containers: [329551c0b44a]
	I0722 04:28:34.489069    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0722 04:28:34.499545    4749 logs.go:276] 0 containers: []
	W0722 04:28:34.499560    4749 logs.go:278] No container was found matching "kindnet"
	I0722 04:28:34.499609    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0722 04:28:34.510216    4749 logs.go:276] 1 containers: [49094829b3ea]
	I0722 04:28:34.510238    4749 logs.go:123] Gathering logs for etcd [f6f88a4e9479] ...
	I0722 04:28:34.510243    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6f88a4e9479"
	I0722 04:28:34.524121    4749 logs.go:123] Gathering logs for coredns [b09ba3d8b400] ...
	I0722 04:28:34.524132    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b09ba3d8b400"
	I0722 04:28:34.536473    4749 logs.go:123] Gathering logs for kube-controller-manager [329551c0b44a] ...
	I0722 04:28:34.536485    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 329551c0b44a"
	I0722 04:28:34.554521    4749 logs.go:123] Gathering logs for dmesg ...
	I0722 04:28:34.554530    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 04:28:34.563601    4749 logs.go:123] Gathering logs for describe nodes ...
	I0722 04:28:34.563612    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0722 04:28:34.598115    4749 logs.go:123] Gathering logs for kube-proxy [d876da6f0d58] ...
	I0722 04:28:34.598124    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d876da6f0d58"
	I0722 04:28:34.610083    4749 logs.go:123] Gathering logs for storage-provisioner [49094829b3ea] ...
	I0722 04:28:34.610095    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49094829b3ea"
	I0722 04:28:34.622207    4749 logs.go:123] Gathering logs for container status ...
	I0722 04:28:34.622221    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 04:28:34.634062    4749 logs.go:123] Gathering logs for coredns [e5643cdb93d0] ...
	I0722 04:28:34.634073    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5643cdb93d0"
	I0722 04:28:34.645988    4749 logs.go:123] Gathering logs for kubelet ...
	I0722 04:28:34.646001    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 04:28:34.681474    4749 logs.go:123] Gathering logs for kube-apiserver [719c046675d2] ...
	I0722 04:28:34.681482    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 719c046675d2"
	I0722 04:28:34.695072    4749 logs.go:123] Gathering logs for coredns [364b5674e735] ...
	I0722 04:28:34.695085    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 364b5674e735"
	I0722 04:28:34.706860    4749 logs.go:123] Gathering logs for coredns [e1fc095d9e3c] ...
	I0722 04:28:34.706873    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1fc095d9e3c"
	I0722 04:28:34.718496    4749 logs.go:123] Gathering logs for kube-scheduler [82090421e099] ...
	I0722 04:28:34.718507    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82090421e099"
	I0722 04:28:34.733581    4749 logs.go:123] Gathering logs for Docker ...
	I0722 04:28:34.733593    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0722 04:28:37.258846    4749 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:28:42.261016    4749 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:28:42.261418    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0722 04:28:42.276852    4749 logs.go:276] 1 containers: [719c046675d2]
	I0722 04:28:42.276919    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0722 04:28:42.289402    4749 logs.go:276] 1 containers: [f6f88a4e9479]
	I0722 04:28:42.289473    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0722 04:28:42.299840    4749 logs.go:276] 4 containers: [b09ba3d8b400 364b5674e735 e1fc095d9e3c e5643cdb93d0]
	I0722 04:28:42.299908    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0722 04:28:42.310380    4749 logs.go:276] 1 containers: [82090421e099]
	I0722 04:28:42.310446    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0722 04:28:42.320757    4749 logs.go:276] 1 containers: [d876da6f0d58]
	I0722 04:28:42.320823    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0722 04:28:42.331212    4749 logs.go:276] 1 containers: [329551c0b44a]
	I0722 04:28:42.331275    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0722 04:28:42.341030    4749 logs.go:276] 0 containers: []
	W0722 04:28:42.341042    4749 logs.go:278] No container was found matching "kindnet"
	I0722 04:28:42.341090    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0722 04:28:42.351037    4749 logs.go:276] 1 containers: [49094829b3ea]
	I0722 04:28:42.351051    4749 logs.go:123] Gathering logs for kube-apiserver [719c046675d2] ...
	I0722 04:28:42.351055    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 719c046675d2"
	I0722 04:28:42.366922    4749 logs.go:123] Gathering logs for kube-scheduler [82090421e099] ...
	I0722 04:28:42.366935    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82090421e099"
	I0722 04:28:42.382401    4749 logs.go:123] Gathering logs for coredns [e1fc095d9e3c] ...
	I0722 04:28:42.382412    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1fc095d9e3c"
	I0722 04:28:42.394160    4749 logs.go:123] Gathering logs for coredns [e5643cdb93d0] ...
	I0722 04:28:42.394174    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5643cdb93d0"
	I0722 04:28:42.406230    4749 logs.go:123] Gathering logs for Docker ...
	I0722 04:28:42.406239    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0722 04:28:42.431001    4749 logs.go:123] Gathering logs for kubelet ...
	I0722 04:28:42.431011    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 04:28:42.468573    4749 logs.go:123] Gathering logs for dmesg ...
	I0722 04:28:42.468580    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 04:28:42.472489    4749 logs.go:123] Gathering logs for coredns [364b5674e735] ...
	I0722 04:28:42.472495    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 364b5674e735"
	I0722 04:28:42.483801    4749 logs.go:123] Gathering logs for describe nodes ...
	I0722 04:28:42.483810    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0722 04:28:42.519024    4749 logs.go:123] Gathering logs for etcd [f6f88a4e9479] ...
	I0722 04:28:42.519035    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6f88a4e9479"
	I0722 04:28:42.535466    4749 logs.go:123] Gathering logs for kube-controller-manager [329551c0b44a] ...
	I0722 04:28:42.535478    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 329551c0b44a"
	I0722 04:28:42.552797    4749 logs.go:123] Gathering logs for container status ...
	I0722 04:28:42.552809    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 04:28:42.564700    4749 logs.go:123] Gathering logs for coredns [b09ba3d8b400] ...
	I0722 04:28:42.564711    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b09ba3d8b400"
	I0722 04:28:42.575967    4749 logs.go:123] Gathering logs for kube-proxy [d876da6f0d58] ...
	I0722 04:28:42.575977    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d876da6f0d58"
	I0722 04:28:42.587330    4749 logs.go:123] Gathering logs for storage-provisioner [49094829b3ea] ...
	I0722 04:28:42.587341    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49094829b3ea"
	I0722 04:28:45.099839    4749 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:28:50.102277    4749 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:28:50.102509    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0722 04:28:50.123856    4749 logs.go:276] 1 containers: [719c046675d2]
	I0722 04:28:50.123968    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0722 04:28:50.139310    4749 logs.go:276] 1 containers: [f6f88a4e9479]
	I0722 04:28:50.139381    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0722 04:28:50.151314    4749 logs.go:276] 4 containers: [b09ba3d8b400 364b5674e735 e1fc095d9e3c e5643cdb93d0]
	I0722 04:28:50.151389    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0722 04:28:50.162105    4749 logs.go:276] 1 containers: [82090421e099]
	I0722 04:28:50.162170    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0722 04:28:50.172263    4749 logs.go:276] 1 containers: [d876da6f0d58]
	I0722 04:28:50.172332    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0722 04:28:50.182729    4749 logs.go:276] 1 containers: [329551c0b44a]
	I0722 04:28:50.182783    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0722 04:28:50.192836    4749 logs.go:276] 0 containers: []
	W0722 04:28:50.192847    4749 logs.go:278] No container was found matching "kindnet"
	I0722 04:28:50.192893    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0722 04:28:50.203413    4749 logs.go:276] 1 containers: [49094829b3ea]
	I0722 04:28:50.203430    4749 logs.go:123] Gathering logs for kube-controller-manager [329551c0b44a] ...
	I0722 04:28:50.203434    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 329551c0b44a"
	I0722 04:28:50.221443    4749 logs.go:123] Gathering logs for Docker ...
	I0722 04:28:50.221452    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0722 04:28:50.249482    4749 logs.go:123] Gathering logs for kubelet ...
	I0722 04:28:50.249491    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 04:28:50.287288    4749 logs.go:123] Gathering logs for dmesg ...
	I0722 04:28:50.287298    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 04:28:50.291533    4749 logs.go:123] Gathering logs for coredns [b09ba3d8b400] ...
	I0722 04:28:50.291539    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b09ba3d8b400"
	I0722 04:28:50.302955    4749 logs.go:123] Gathering logs for coredns [e5643cdb93d0] ...
	I0722 04:28:50.302967    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e5643cdb93d0"
	I0722 04:28:50.314633    4749 logs.go:123] Gathering logs for storage-provisioner [49094829b3ea] ...
	I0722 04:28:50.314644    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49094829b3ea"
	I0722 04:28:50.325610    4749 logs.go:123] Gathering logs for describe nodes ...
	I0722 04:28:50.325619    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0722 04:28:50.361006    4749 logs.go:123] Gathering logs for kube-apiserver [719c046675d2] ...
	I0722 04:28:50.361016    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 719c046675d2"
	I0722 04:28:50.379610    4749 logs.go:123] Gathering logs for coredns [e1fc095d9e3c] ...
	I0722 04:28:50.379621    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1fc095d9e3c"
	I0722 04:28:50.398459    4749 logs.go:123] Gathering logs for etcd [f6f88a4e9479] ...
	I0722 04:28:50.398471    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6f88a4e9479"
	I0722 04:28:50.412275    4749 logs.go:123] Gathering logs for coredns [364b5674e735] ...
	I0722 04:28:50.412285    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 364b5674e735"
	I0722 04:28:50.423496    4749 logs.go:123] Gathering logs for kube-scheduler [82090421e099] ...
	I0722 04:28:50.423507    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82090421e099"
	I0722 04:28:50.446158    4749 logs.go:123] Gathering logs for kube-proxy [d876da6f0d58] ...
	I0722 04:28:50.446167    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d876da6f0d58"
	I0722 04:28:50.457614    4749 logs.go:123] Gathering logs for container status ...
	I0722 04:28:50.457625    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 04:28:52.971189    4749 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:28:57.973414    4749 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:28:57.973801    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0722 04:28:58.009249    4749 logs.go:276] 1 containers: [719c046675d2]
	I0722 04:28:58.009372    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0722 04:28:58.029681    4749 logs.go:276] 1 containers: [f6f88a4e9479]
	I0722 04:28:58.029775    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0722 04:28:58.043808    4749 logs.go:276] 4 containers: [2223b8137a41 b09ba3d8b400 364b5674e735 e1fc095d9e3c]
	I0722 04:28:58.043888    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0722 04:28:58.055366    4749 logs.go:276] 1 containers: [82090421e099]
	I0722 04:28:58.055437    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0722 04:28:58.065902    4749 logs.go:276] 1 containers: [d876da6f0d58]
	I0722 04:28:58.065969    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0722 04:28:58.076048    4749 logs.go:276] 1 containers: [329551c0b44a]
	I0722 04:28:58.076106    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0722 04:28:58.087270    4749 logs.go:276] 0 containers: []
	W0722 04:28:58.087280    4749 logs.go:278] No container was found matching "kindnet"
	I0722 04:28:58.087328    4749 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0722 04:28:58.097416    4749 logs.go:276] 1 containers: [49094829b3ea]
	I0722 04:28:58.097431    4749 logs.go:123] Gathering logs for storage-provisioner [49094829b3ea] ...
	I0722 04:28:58.097440    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 49094829b3ea"
	I0722 04:28:58.108747    4749 logs.go:123] Gathering logs for kube-apiserver [719c046675d2] ...
	I0722 04:28:58.108758    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 719c046675d2"
	I0722 04:28:58.122817    4749 logs.go:123] Gathering logs for coredns [b09ba3d8b400] ...
	I0722 04:28:58.122828    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b09ba3d8b400"
	I0722 04:28:58.134925    4749 logs.go:123] Gathering logs for kube-scheduler [82090421e099] ...
	I0722 04:28:58.134937    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 82090421e099"
	I0722 04:28:58.149625    4749 logs.go:123] Gathering logs for container status ...
	I0722 04:28:58.149635    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0722 04:28:58.161663    4749 logs.go:123] Gathering logs for coredns [2223b8137a41] ...
	I0722 04:28:58.161673    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2223b8137a41"
	I0722 04:28:58.177180    4749 logs.go:123] Gathering logs for coredns [364b5674e735] ...
	I0722 04:28:58.177193    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 364b5674e735"
	I0722 04:28:58.188667    4749 logs.go:123] Gathering logs for coredns [e1fc095d9e3c] ...
	I0722 04:28:58.188682    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e1fc095d9e3c"
	I0722 04:28:58.200274    4749 logs.go:123] Gathering logs for kube-proxy [d876da6f0d58] ...
	I0722 04:28:58.200283    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d876da6f0d58"
	I0722 04:28:58.212235    4749 logs.go:123] Gathering logs for kube-controller-manager [329551c0b44a] ...
	I0722 04:28:58.212246    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 329551c0b44a"
	I0722 04:28:58.229515    4749 logs.go:123] Gathering logs for Docker ...
	I0722 04:28:58.229524    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0722 04:28:58.254570    4749 logs.go:123] Gathering logs for dmesg ...
	I0722 04:28:58.254579    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0722 04:28:58.258857    4749 logs.go:123] Gathering logs for describe nodes ...
	I0722 04:28:58.258866    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0722 04:28:58.292461    4749 logs.go:123] Gathering logs for kubelet ...
	I0722 04:28:58.292473    4749 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0722 04:28:58.329579    4749 logs.go:123] Gathering logs for etcd [f6f88a4e9479] ...
	I0722 04:28:58.329588    4749 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f6f88a4e9479"
	I0722 04:29:00.845581    4749 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0722 04:29:05.848192    4749 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0722 04:29:05.862400    4749 out.go:177] 
	W0722 04:29:05.866773    4749 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0722 04:29:05.866778    4749 out.go:239] * 
	* 
	W0722 04:29:05.867181    4749 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0722 04:29:05.878598    4749 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:200: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p stopped-upgrade-239000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (579.23s)
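
For reference, the loop above is minikube repeatedly probing the guest apiserver healthz endpoint and re-collecting component logs each time the probe times out. A minimal manual re-check of the same probe, assuming the stopped-upgrade-239000 profile is still reachable over SSH, that curl is present in the guest image, and reusing the guest IP and apiserver container ID (719c046675d2) printed in the log above, would be roughly:

	# run the same healthz probe from inside the guest (self-signed cert, hence -k);
	# assumes curl exists in the boot2docker image
	out/minikube-darwin-arm64 ssh -p stopped-upgrade-239000 -- curl -k https://10.0.2.15:8443/healthz
	# pull the apiserver container log the same way the log-gathering loop does
	out/minikube-darwin-arm64 ssh -p stopped-upgrade-239000 -- docker logs --tail 400 719c046675d2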

                                                
                                    
TestPause/serial/Start (9.78s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-002000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-002000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (9.73715275s)

                                                
                                                
-- stdout --
	* [pause-002000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19313
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19313-1127/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19313-1127/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "pause-002000" primary control-plane node in "pause-002000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-002000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-002000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-002000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-002000 -n pause-002000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-002000 -n pause-002000: exit status 7 (42.744417ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-002000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (9.78s)
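
This failure, and every remaining qemu2 start failure below, shares the same host-side root cause: the qemu2 driver cannot connect to the socket_vmnet socket before launching qemu-system-aarch64. A quick sanity check on the agent, using only the paths already printed in the logs (nothing else assumed), would be:

	# the unix socket the qemu2 driver dials
	ls -l /var/run/socket_vmnet
	# the client binary minikube wraps around qemu-system-aarch64
	ls -l /opt/socket_vmnet/bin/socket_vmnet_client

If either path is missing or the socket is not owned by a running socket_vmnet process, every start on this agent will fail with the same "Connection refused" error regardless of the test being run.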

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (9.93s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-179000 --driver=qemu2 
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-179000 --driver=qemu2 : exit status 80 (9.868308s)

                                                
                                                
-- stdout --
	* [NoKubernetes-179000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19313
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19313-1127/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19313-1127/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "NoKubernetes-179000" primary control-plane node in "NoKubernetes-179000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-179000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-179000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-179000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-179000 -n NoKubernetes-179000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-179000 -n NoKubernetes-179000: exit status 7 (59.0295ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-179000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (9.93s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (5.27s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-179000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-179000 --no-kubernetes --driver=qemu2 : exit status 80 (5.239739708s)

                                                
                                                
-- stdout --
	* [NoKubernetes-179000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19313
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19313-1127/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19313-1127/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-179000
	* Restarting existing qemu2 VM for "NoKubernetes-179000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-179000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-179000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-179000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-179000 -n NoKubernetes-179000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-179000 -n NoKubernetes-179000: exit status 7 (31.319167ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-179000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (5.27s)

                                                
                                    
TestNoKubernetes/serial/Start (5.3s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-179000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-179000 --no-kubernetes --driver=qemu2 : exit status 80 (5.244421792s)

                                                
                                                
-- stdout --
	* [NoKubernetes-179000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19313
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19313-1127/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19313-1127/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-179000
	* Restarting existing qemu2 VM for "NoKubernetes-179000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-179000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-179000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-179000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-179000 -n NoKubernetes-179000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-179000 -n NoKubernetes-179000: exit status 7 (54.728792ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-179000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (5.30s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (5.32s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-179000 --driver=qemu2 
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-179000 --driver=qemu2 : exit status 80 (5.259642667s)

                                                
                                                
-- stdout --
	* [NoKubernetes-179000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19313
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19313-1127/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19313-1127/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-179000
	* Restarting existing qemu2 VM for "NoKubernetes-179000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-179000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-179000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-179000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-179000 -n NoKubernetes-179000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-179000 -n NoKubernetes-179000: exit status 7 (62.081209ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-179000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (5.32s)
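
All four NoKubernetes subtests above reuse the NoKubernetes-179000 profile, so after the first failed create the later runs keep hitting the "Restarting existing qemu2 VM" path against the same dead socket. Once socket_vmnet is reachable again, the cleanup the output itself recommends, followed by the original invocation from no_kubernetes_test.go, is a reasonable manual retry (sketch only, assuming the same out/minikube-darwin-arm64 binary):

	# drop the stale profile left over from the failed create
	out/minikube-darwin-arm64 delete -p NoKubernetes-179000
	# re-run the first start that failed
	out/minikube-darwin-arm64 start -p NoKubernetes-179000 --driver=qemu2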

                                                
                                    
TestNetworkPlugins/group/auto/Start (9.73s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-055000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
E0722 04:27:31.148820    1618 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/addons-974000/client.crt: no such file or directory
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-055000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (9.733699125s)

                                                
                                                
-- stdout --
	* [auto-055000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19313
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19313-1127/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19313-1127/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "auto-055000" primary control-plane node in "auto-055000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-055000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0722 04:27:29.493002    5100 out.go:291] Setting OutFile to fd 1 ...
	I0722 04:27:29.493136    5100 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:27:29.493143    5100 out.go:304] Setting ErrFile to fd 2...
	I0722 04:27:29.493145    5100 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:27:29.493276    5100 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19313-1127/.minikube/bin
	I0722 04:27:29.494309    5100 out.go:298] Setting JSON to false
	I0722 04:27:29.510805    5100 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5218,"bootTime":1721642431,"procs":459,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0722 04:27:29.510879    5100 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0722 04:27:29.516904    5100 out.go:177] * [auto-055000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0722 04:27:29.524906    5100 out.go:177]   - MINIKUBE_LOCATION=19313
	I0722 04:27:29.524941    5100 notify.go:220] Checking for updates...
	I0722 04:27:29.532826    5100 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19313-1127/kubeconfig
	I0722 04:27:29.535902    5100 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0722 04:27:29.539846    5100 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0722 04:27:29.542884    5100 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19313-1127/.minikube
	I0722 04:27:29.544198    5100 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0722 04:27:29.547119    5100 config.go:182] Loaded profile config "multinode-941000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0722 04:27:29.547189    5100 config.go:182] Loaded profile config "stopped-upgrade-239000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0722 04:27:29.547232    5100 driver.go:392] Setting default libvirt URI to qemu:///system
	I0722 04:27:29.549873    5100 out.go:177] * Using the qemu2 driver based on user configuration
	I0722 04:27:29.554924    5100 start.go:297] selected driver: qemu2
	I0722 04:27:29.554930    5100 start.go:901] validating driver "qemu2" against <nil>
	I0722 04:27:29.554936    5100 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0722 04:27:29.557488    5100 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0722 04:27:29.561908    5100 out.go:177] * Automatically selected the socket_vmnet network
	I0722 04:27:29.563322    5100 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0722 04:27:29.563360    5100 cni.go:84] Creating CNI manager for ""
	I0722 04:27:29.563370    5100 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0722 04:27:29.563374    5100 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0722 04:27:29.563414    5100 start.go:340] cluster config:
	{Name:auto-055000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:auto-055000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:dock
er CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_clie
nt SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 04:27:29.567416    5100 iso.go:125] acquiring lock: {Name:mkd71eaf3e91c1dd737b75fca5ca69ff9bdad18f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 04:27:29.571915    5100 out.go:177] * Starting "auto-055000" primary control-plane node in "auto-055000" cluster
	I0722 04:27:29.575900    5100 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0722 04:27:29.575951    5100 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0722 04:27:29.575968    5100 cache.go:56] Caching tarball of preloaded images
	I0722 04:27:29.576072    5100 preload.go:172] Found /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0722 04:27:29.576079    5100 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0722 04:27:29.576139    5100 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/auto-055000/config.json ...
	I0722 04:27:29.576154    5100 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/auto-055000/config.json: {Name:mka1df40bf33450d5159970b9b8696fb27011741 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 04:27:29.576451    5100 start.go:360] acquireMachinesLock for auto-055000: {Name:mkd413881e612ea8d9ddb0175c22cca270cd2452 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0722 04:27:29.576485    5100 start.go:364] duration metric: took 27.625µs to acquireMachinesLock for "auto-055000"
	I0722 04:27:29.576496    5100 start.go:93] Provisioning new machine with config: &{Name:auto-055000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.30.3 ClusterName:auto-055000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountP
ort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0722 04:27:29.576546    5100 start.go:125] createHost starting for "" (driver="qemu2")
	I0722 04:27:29.583934    5100 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0722 04:27:29.600364    5100 start.go:159] libmachine.API.Create for "auto-055000" (driver="qemu2")
	I0722 04:27:29.600401    5100 client.go:168] LocalClient.Create starting
	I0722 04:27:29.600469    5100 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19313-1127/.minikube/certs/ca.pem
	I0722 04:27:29.600500    5100 main.go:141] libmachine: Decoding PEM data...
	I0722 04:27:29.600509    5100 main.go:141] libmachine: Parsing certificate...
	I0722 04:27:29.600553    5100 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19313-1127/.minikube/certs/cert.pem
	I0722 04:27:29.600575    5100 main.go:141] libmachine: Decoding PEM data...
	I0722 04:27:29.600582    5100 main.go:141] libmachine: Parsing certificate...
	I0722 04:27:29.601044    5100 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19313-1127/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0722 04:27:29.738739    5100 main.go:141] libmachine: Creating SSH key...
	I0722 04:27:29.805683    5100 main.go:141] libmachine: Creating Disk image...
	I0722 04:27:29.805692    5100 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0722 04:27:29.805929    5100 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/auto-055000/disk.qcow2.raw /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/auto-055000/disk.qcow2
	I0722 04:27:29.815094    5100 main.go:141] libmachine: STDOUT: 
	I0722 04:27:29.815114    5100 main.go:141] libmachine: STDERR: 
	I0722 04:27:29.815164    5100 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/auto-055000/disk.qcow2 +20000M
	I0722 04:27:29.822933    5100 main.go:141] libmachine: STDOUT: Image resized.
	
	I0722 04:27:29.822949    5100 main.go:141] libmachine: STDERR: 
	I0722 04:27:29.822964    5100 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/auto-055000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/auto-055000/disk.qcow2
	I0722 04:27:29.822969    5100 main.go:141] libmachine: Starting QEMU VM...
	I0722 04:27:29.822986    5100 qemu.go:418] Using hvf for hardware acceleration
	I0722 04:27:29.823022    5100 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/auto-055000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19313-1127/.minikube/machines/auto-055000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/auto-055000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:32:34:7f:9f:04 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/auto-055000/disk.qcow2
	I0722 04:27:29.824617    5100 main.go:141] libmachine: STDOUT: 
	I0722 04:27:29.824635    5100 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0722 04:27:29.824659    5100 client.go:171] duration metric: took 224.258542ms to LocalClient.Create
	I0722 04:27:31.826253    5100 start.go:128] duration metric: took 2.24972825s to createHost
	I0722 04:27:31.826301    5100 start.go:83] releasing machines lock for "auto-055000", held for 2.2498365s
	W0722 04:27:31.826354    5100 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0722 04:27:31.835622    5100 out.go:177] * Deleting "auto-055000" in qemu2 ...
	W0722 04:27:31.855802    5100 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0722 04:27:31.855815    5100 start.go:729] Will try again in 5 seconds ...
	I0722 04:27:36.858030    5100 start.go:360] acquireMachinesLock for auto-055000: {Name:mkd413881e612ea8d9ddb0175c22cca270cd2452 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0722 04:27:36.858607    5100 start.go:364] duration metric: took 457.75µs to acquireMachinesLock for "auto-055000"
	I0722 04:27:36.858687    5100 start.go:93] Provisioning new machine with config: &{Name:auto-055000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.30.3 ClusterName:auto-055000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountP
ort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0722 04:27:36.859035    5100 start.go:125] createHost starting for "" (driver="qemu2")
	I0722 04:27:36.864780    5100 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0722 04:27:36.914743    5100 start.go:159] libmachine.API.Create for "auto-055000" (driver="qemu2")
	I0722 04:27:36.914795    5100 client.go:168] LocalClient.Create starting
	I0722 04:27:36.914906    5100 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19313-1127/.minikube/certs/ca.pem
	I0722 04:27:36.914978    5100 main.go:141] libmachine: Decoding PEM data...
	I0722 04:27:36.914997    5100 main.go:141] libmachine: Parsing certificate...
	I0722 04:27:36.915068    5100 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19313-1127/.minikube/certs/cert.pem
	I0722 04:27:36.915113    5100 main.go:141] libmachine: Decoding PEM data...
	I0722 04:27:36.915125    5100 main.go:141] libmachine: Parsing certificate...
	I0722 04:27:36.915654    5100 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19313-1127/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0722 04:27:37.063577    5100 main.go:141] libmachine: Creating SSH key...
	I0722 04:27:37.142155    5100 main.go:141] libmachine: Creating Disk image...
	I0722 04:27:37.142161    5100 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0722 04:27:37.142378    5100 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/auto-055000/disk.qcow2.raw /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/auto-055000/disk.qcow2
	I0722 04:27:37.152276    5100 main.go:141] libmachine: STDOUT: 
	I0722 04:27:37.152297    5100 main.go:141] libmachine: STDERR: 
	I0722 04:27:37.152352    5100 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/auto-055000/disk.qcow2 +20000M
	I0722 04:27:37.160464    5100 main.go:141] libmachine: STDOUT: Image resized.
	
	I0722 04:27:37.160479    5100 main.go:141] libmachine: STDERR: 
	I0722 04:27:37.160494    5100 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/auto-055000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/auto-055000/disk.qcow2
	I0722 04:27:37.160499    5100 main.go:141] libmachine: Starting QEMU VM...
	I0722 04:27:37.160508    5100 qemu.go:418] Using hvf for hardware acceleration
	I0722 04:27:37.160531    5100 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/auto-055000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19313-1127/.minikube/machines/auto-055000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/auto-055000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:28:23:e8:3c:0f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/auto-055000/disk.qcow2
	I0722 04:27:37.162123    5100 main.go:141] libmachine: STDOUT: 
	I0722 04:27:37.162138    5100 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0722 04:27:37.162150    5100 client.go:171] duration metric: took 247.354167ms to LocalClient.Create
	I0722 04:27:39.164244    5100 start.go:128] duration metric: took 2.305224375s to createHost
	I0722 04:27:39.164281    5100 start.go:83] releasing machines lock for "auto-055000", held for 2.3056925s
	W0722 04:27:39.164490    5100 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p auto-055000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p auto-055000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0722 04:27:39.168859    5100 out.go:177] 
	W0722 04:27:39.178761    5100 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0722 04:27:39.178772    5100 out.go:239] * 
	* 
	W0722 04:27:39.179781    5100 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0722 04:27:39.189837    5100 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (9.73s)
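Note: every start attempt in this run fails at the same point. The qemu-img steps complete with empty STDERR, and the failure only appears when socket_vmnet_client tries to hand QEMU a network file descriptor: Failed to connect to "/var/run/socket_vmnet": Connection refused. A minimal diagnostic sketch (not part of minikube; it only assumes the socket path printed in the logs above) that reproduces the same connectivity check from Go:

	// check_vmnet.go - hypothetical helper: dial the UNIX socket that
	// socket_vmnet_client connects to before launching qemu-system-aarch64.
	// "connection refused" means no socket_vmnet daemon is listening there.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			fmt.Println("socket_vmnet not reachable:", err)
			return
		}
		defer conn.Close()
		fmt.Println("socket_vmnet is listening at /var/run/socket_vmnet")
	}

If this check fails on the CI host, the socket_vmnet daemon (SocketVMnetPath:/var/run/socket_vmnet in the cluster config above) is not running, and every qemu2 test that selects the socket_vmnet network will exit with status 80 in the same way.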

                                                
                                    
TestNetworkPlugins/group/calico/Start (9.81s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-055000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
E0722 04:27:48.073242    1618 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/addons-974000/client.crt: no such file or directory
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-055000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (9.803609083s)

                                                
                                                
-- stdout --
	* [calico-055000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19313
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19313-1127/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19313-1127/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "calico-055000" primary control-plane node in "calico-055000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-055000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0722 04:27:41.326753    5214 out.go:291] Setting OutFile to fd 1 ...
	I0722 04:27:41.326891    5214 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:27:41.326894    5214 out.go:304] Setting ErrFile to fd 2...
	I0722 04:27:41.326896    5214 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:27:41.327023    5214 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19313-1127/.minikube/bin
	I0722 04:27:41.328099    5214 out.go:298] Setting JSON to false
	I0722 04:27:41.344917    5214 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5230,"bootTime":1721642431,"procs":456,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0722 04:27:41.345008    5214 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0722 04:27:41.351464    5214 out.go:177] * [calico-055000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0722 04:27:41.359493    5214 out.go:177]   - MINIKUBE_LOCATION=19313
	I0722 04:27:41.359501    5214 notify.go:220] Checking for updates...
	I0722 04:27:41.367414    5214 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19313-1127/kubeconfig
	I0722 04:27:41.368833    5214 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0722 04:27:41.373442    5214 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0722 04:27:41.376495    5214 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19313-1127/.minikube
	I0722 04:27:41.377923    5214 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0722 04:27:41.381813    5214 config.go:182] Loaded profile config "multinode-941000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0722 04:27:41.381884    5214 config.go:182] Loaded profile config "stopped-upgrade-239000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0722 04:27:41.381962    5214 driver.go:392] Setting default libvirt URI to qemu:///system
	I0722 04:27:41.385413    5214 out.go:177] * Using the qemu2 driver based on user configuration
	I0722 04:27:41.390415    5214 start.go:297] selected driver: qemu2
	I0722 04:27:41.390433    5214 start.go:901] validating driver "qemu2" against <nil>
	I0722 04:27:41.390440    5214 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0722 04:27:41.392710    5214 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0722 04:27:41.396383    5214 out.go:177] * Automatically selected the socket_vmnet network
	I0722 04:27:41.397857    5214 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0722 04:27:41.397882    5214 cni.go:84] Creating CNI manager for "calico"
	I0722 04:27:41.397889    5214 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I0722 04:27:41.397926    5214 start.go:340] cluster config:
	{Name:calico-055000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:calico-055000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:
docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_
vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 04:27:41.401604    5214 iso.go:125] acquiring lock: {Name:mkd71eaf3e91c1dd737b75fca5ca69ff9bdad18f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 04:27:41.405474    5214 out.go:177] * Starting "calico-055000" primary control-plane node in "calico-055000" cluster
	I0722 04:27:41.413411    5214 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0722 04:27:41.413428    5214 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0722 04:27:41.413442    5214 cache.go:56] Caching tarball of preloaded images
	I0722 04:27:41.413512    5214 preload.go:172] Found /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0722 04:27:41.413518    5214 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0722 04:27:41.413585    5214 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/calico-055000/config.json ...
	I0722 04:27:41.413604    5214 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/calico-055000/config.json: {Name:mk5c6b0d9968f1e72ae7bb0d2f62892f5e92745e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 04:27:41.413808    5214 start.go:360] acquireMachinesLock for calico-055000: {Name:mkd413881e612ea8d9ddb0175c22cca270cd2452 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0722 04:27:41.413837    5214 start.go:364] duration metric: took 24.25µs to acquireMachinesLock for "calico-055000"
	I0722 04:27:41.413846    5214 start.go:93] Provisioning new machine with config: &{Name:calico-055000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.30.3 ClusterName:calico-055000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0722 04:27:41.413876    5214 start.go:125] createHost starting for "" (driver="qemu2")
	I0722 04:27:41.422405    5214 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0722 04:27:41.437719    5214 start.go:159] libmachine.API.Create for "calico-055000" (driver="qemu2")
	I0722 04:27:41.437745    5214 client.go:168] LocalClient.Create starting
	I0722 04:27:41.437803    5214 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19313-1127/.minikube/certs/ca.pem
	I0722 04:27:41.437835    5214 main.go:141] libmachine: Decoding PEM data...
	I0722 04:27:41.437843    5214 main.go:141] libmachine: Parsing certificate...
	I0722 04:27:41.437886    5214 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19313-1127/.minikube/certs/cert.pem
	I0722 04:27:41.437911    5214 main.go:141] libmachine: Decoding PEM data...
	I0722 04:27:41.437916    5214 main.go:141] libmachine: Parsing certificate...
	I0722 04:27:41.438264    5214 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19313-1127/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0722 04:27:41.576282    5214 main.go:141] libmachine: Creating SSH key...
	I0722 04:27:41.636819    5214 main.go:141] libmachine: Creating Disk image...
	I0722 04:27:41.636828    5214 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0722 04:27:41.637065    5214 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/calico-055000/disk.qcow2.raw /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/calico-055000/disk.qcow2
	I0722 04:27:41.647870    5214 main.go:141] libmachine: STDOUT: 
	I0722 04:27:41.647913    5214 main.go:141] libmachine: STDERR: 
	I0722 04:27:41.648024    5214 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/calico-055000/disk.qcow2 +20000M
	I0722 04:27:41.657876    5214 main.go:141] libmachine: STDOUT: Image resized.
	
	I0722 04:27:41.657907    5214 main.go:141] libmachine: STDERR: 
	I0722 04:27:41.657934    5214 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/calico-055000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/calico-055000/disk.qcow2
	I0722 04:27:41.657948    5214 main.go:141] libmachine: Starting QEMU VM...
	I0722 04:27:41.657963    5214 qemu.go:418] Using hvf for hardware acceleration
	I0722 04:27:41.657990    5214 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/calico-055000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19313-1127/.minikube/machines/calico-055000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/calico-055000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:f4:7d:26:bc:2a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/calico-055000/disk.qcow2
	I0722 04:27:41.660119    5214 main.go:141] libmachine: STDOUT: 
	I0722 04:27:41.660144    5214 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0722 04:27:41.660164    5214 client.go:171] duration metric: took 222.417375ms to LocalClient.Create
	I0722 04:27:43.662431    5214 start.go:128] duration metric: took 2.248553375s to createHost
	I0722 04:27:43.662566    5214 start.go:83] releasing machines lock for "calico-055000", held for 2.248755875s
	W0722 04:27:43.662617    5214 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0722 04:27:43.671844    5214 out.go:177] * Deleting "calico-055000" in qemu2 ...
	W0722 04:27:43.694314    5214 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0722 04:27:43.694356    5214 start.go:729] Will try again in 5 seconds ...
	I0722 04:27:48.696576    5214 start.go:360] acquireMachinesLock for calico-055000: {Name:mkd413881e612ea8d9ddb0175c22cca270cd2452 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0722 04:27:48.697129    5214 start.go:364] duration metric: took 463.959µs to acquireMachinesLock for "calico-055000"
	I0722 04:27:48.697263    5214 start.go:93] Provisioning new machine with config: &{Name:calico-055000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.30.3 ClusterName:calico-055000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0722 04:27:48.697505    5214 start.go:125] createHost starting for "" (driver="qemu2")
	I0722 04:27:48.706027    5214 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0722 04:27:48.757737    5214 start.go:159] libmachine.API.Create for "calico-055000" (driver="qemu2")
	I0722 04:27:48.757785    5214 client.go:168] LocalClient.Create starting
	I0722 04:27:48.757926    5214 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19313-1127/.minikube/certs/ca.pem
	I0722 04:27:48.757995    5214 main.go:141] libmachine: Decoding PEM data...
	I0722 04:27:48.758014    5214 main.go:141] libmachine: Parsing certificate...
	I0722 04:27:48.758082    5214 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19313-1127/.minikube/certs/cert.pem
	I0722 04:27:48.758133    5214 main.go:141] libmachine: Decoding PEM data...
	I0722 04:27:48.758148    5214 main.go:141] libmachine: Parsing certificate...
	I0722 04:27:48.758704    5214 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19313-1127/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0722 04:27:48.905734    5214 main.go:141] libmachine: Creating SSH key...
	I0722 04:27:49.043175    5214 main.go:141] libmachine: Creating Disk image...
	I0722 04:27:49.043182    5214 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0722 04:27:49.043407    5214 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/calico-055000/disk.qcow2.raw /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/calico-055000/disk.qcow2
	I0722 04:27:49.052797    5214 main.go:141] libmachine: STDOUT: 
	I0722 04:27:49.052827    5214 main.go:141] libmachine: STDERR: 
	I0722 04:27:49.052882    5214 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/calico-055000/disk.qcow2 +20000M
	I0722 04:27:49.060821    5214 main.go:141] libmachine: STDOUT: Image resized.
	
	I0722 04:27:49.060838    5214 main.go:141] libmachine: STDERR: 
	I0722 04:27:49.060849    5214 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/calico-055000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/calico-055000/disk.qcow2
	I0722 04:27:49.060855    5214 main.go:141] libmachine: Starting QEMU VM...
	I0722 04:27:49.060867    5214 qemu.go:418] Using hvf for hardware acceleration
	I0722 04:27:49.060913    5214 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/calico-055000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19313-1127/.minikube/machines/calico-055000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/calico-055000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:06:bd:a4:b3:75 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/calico-055000/disk.qcow2
	I0722 04:27:49.062624    5214 main.go:141] libmachine: STDOUT: 
	I0722 04:27:49.062640    5214 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0722 04:27:49.062651    5214 client.go:171] duration metric: took 304.865542ms to LocalClient.Create
	I0722 04:27:51.064877    5214 start.go:128] duration metric: took 2.367368792s to createHost
	I0722 04:27:51.064947    5214 start.go:83] releasing machines lock for "calico-055000", held for 2.367834375s
	W0722 04:27:51.065287    5214 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p calico-055000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p calico-055000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0722 04:27:51.073961    5214 out.go:177] 
	W0722 04:27:51.077905    5214 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0722 04:27:51.077921    5214 out.go:239] * 
	* 
	W0722 04:27:51.079487    5214 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0722 04:27:51.092924    5214 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (9.81s)
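For contrast, the disk-creation phase succeeds on every attempt: libmachine shells out to qemu-img to convert the raw base image to qcow2 and then grow it by 20000M, and both commands return with empty STDERR. A short sketch, assuming the same paths the calico-055000 log prints, of what that step amounts to:

	// create_disk.go - hypothetical sketch of the "Creating 20000 MB hard disk
	// image" step seen above: convert the raw image to qcow2, then resize it.
	// Paths are copied from the calico-055000 log lines; not minikube code.
	package main

	import (
		"log"
		"os/exec"
	)

	func run(name string, args ...string) {
		out, err := exec.Command(name, args...).CombinedOutput()
		if err != nil {
			log.Fatalf("%s %v failed: %v\n%s", name, args, err, out)
		}
		log.Printf("%s ok\n%s", name, out)
	}

	func main() {
		base := "/Users/jenkins/minikube-integration/19313-1127/.minikube/machines/calico-055000/disk.qcow2"
		run("qemu-img", "convert", "-f", "raw", "-O", "qcow2", base+".raw", base)
		run("qemu-img", "resize", base, "+20000M")
	}

Since this part of provisioning works, the exit status 80 failures across this group point at host networking setup (the missing socket_vmnet listener), not at QEMU or the disk images.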

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (9.73s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-055000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-055000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (9.727096709s)

                                                
                                                
-- stdout --
	* [custom-flannel-055000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19313
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19313-1127/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19313-1127/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "custom-flannel-055000" primary control-plane node in "custom-flannel-055000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-055000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0722 04:27:53.438245    5339 out.go:291] Setting OutFile to fd 1 ...
	I0722 04:27:53.438390    5339 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:27:53.438393    5339 out.go:304] Setting ErrFile to fd 2...
	I0722 04:27:53.438395    5339 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:27:53.438524    5339 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19313-1127/.minikube/bin
	I0722 04:27:53.439564    5339 out.go:298] Setting JSON to false
	I0722 04:27:53.455900    5339 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5242,"bootTime":1721642431,"procs":458,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0722 04:27:53.455960    5339 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0722 04:27:53.462615    5339 out.go:177] * [custom-flannel-055000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0722 04:27:53.470600    5339 out.go:177]   - MINIKUBE_LOCATION=19313
	I0722 04:27:53.470682    5339 notify.go:220] Checking for updates...
	I0722 04:27:53.478533    5339 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19313-1127/kubeconfig
	I0722 04:27:53.479849    5339 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0722 04:27:53.483593    5339 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0722 04:27:53.486585    5339 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19313-1127/.minikube
	I0722 04:27:53.487875    5339 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0722 04:27:53.490933    5339 config.go:182] Loaded profile config "multinode-941000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0722 04:27:53.490998    5339 config.go:182] Loaded profile config "stopped-upgrade-239000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0722 04:27:53.491059    5339 driver.go:392] Setting default libvirt URI to qemu:///system
	I0722 04:27:53.495529    5339 out.go:177] * Using the qemu2 driver based on user configuration
	I0722 04:27:53.500561    5339 start.go:297] selected driver: qemu2
	I0722 04:27:53.500570    5339 start.go:901] validating driver "qemu2" against <nil>
	I0722 04:27:53.500577    5339 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0722 04:27:53.502926    5339 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0722 04:27:53.507508    5339 out.go:177] * Automatically selected the socket_vmnet network
	I0722 04:27:53.508752    5339 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0722 04:27:53.508770    5339 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0722 04:27:53.508780    5339 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0722 04:27:53.508810    5339 start.go:340] cluster config:
	{Name:custom-flannel-055000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:custom-flannel-055000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClie
ntPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 04:27:53.512587    5339 iso.go:125] acquiring lock: {Name:mkd71eaf3e91c1dd737b75fca5ca69ff9bdad18f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 04:27:53.520585    5339 out.go:177] * Starting "custom-flannel-055000" primary control-plane node in "custom-flannel-055000" cluster
	I0722 04:27:53.524514    5339 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0722 04:27:53.524532    5339 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0722 04:27:53.524544    5339 cache.go:56] Caching tarball of preloaded images
	I0722 04:27:53.524616    5339 preload.go:172] Found /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0722 04:27:53.524622    5339 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0722 04:27:53.524677    5339 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/custom-flannel-055000/config.json ...
	I0722 04:27:53.524689    5339 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/custom-flannel-055000/config.json: {Name:mk92dd01a1e13d0a3303170db5845f4caaffbab1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 04:27:53.524985    5339 start.go:360] acquireMachinesLock for custom-flannel-055000: {Name:mkd413881e612ea8d9ddb0175c22cca270cd2452 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0722 04:27:53.525018    5339 start.go:364] duration metric: took 24.542µs to acquireMachinesLock for "custom-flannel-055000"
	I0722 04:27:53.525027    5339 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-055000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.30.3 ClusterName:custom-flannel-055000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0722 04:27:53.525051    5339 start.go:125] createHost starting for "" (driver="qemu2")
	I0722 04:27:53.533515    5339 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0722 04:27:53.548466    5339 start.go:159] libmachine.API.Create for "custom-flannel-055000" (driver="qemu2")
	I0722 04:27:53.548494    5339 client.go:168] LocalClient.Create starting
	I0722 04:27:53.548551    5339 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19313-1127/.minikube/certs/ca.pem
	I0722 04:27:53.548583    5339 main.go:141] libmachine: Decoding PEM data...
	I0722 04:27:53.548594    5339 main.go:141] libmachine: Parsing certificate...
	I0722 04:27:53.548630    5339 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19313-1127/.minikube/certs/cert.pem
	I0722 04:27:53.548652    5339 main.go:141] libmachine: Decoding PEM data...
	I0722 04:27:53.548660    5339 main.go:141] libmachine: Parsing certificate...
	I0722 04:27:53.548988    5339 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19313-1127/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0722 04:27:53.687306    5339 main.go:141] libmachine: Creating SSH key...
	I0722 04:27:53.756864    5339 main.go:141] libmachine: Creating Disk image...
	I0722 04:27:53.756876    5339 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0722 04:27:53.757110    5339 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/custom-flannel-055000/disk.qcow2.raw /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/custom-flannel-055000/disk.qcow2
	I0722 04:27:53.767730    5339 main.go:141] libmachine: STDOUT: 
	I0722 04:27:53.767764    5339 main.go:141] libmachine: STDERR: 
	I0722 04:27:53.767845    5339 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/custom-flannel-055000/disk.qcow2 +20000M
	I0722 04:27:53.777548    5339 main.go:141] libmachine: STDOUT: Image resized.
	
	I0722 04:27:53.777569    5339 main.go:141] libmachine: STDERR: 
	I0722 04:27:53.777585    5339 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/custom-flannel-055000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/custom-flannel-055000/disk.qcow2
	I0722 04:27:53.777594    5339 main.go:141] libmachine: Starting QEMU VM...
	I0722 04:27:53.777610    5339 qemu.go:418] Using hvf for hardware acceleration
	I0722 04:27:53.777644    5339 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/custom-flannel-055000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19313-1127/.minikube/machines/custom-flannel-055000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/custom-flannel-055000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fe:3b:f4:3d:fd:62 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/custom-flannel-055000/disk.qcow2
	I0722 04:27:53.779664    5339 main.go:141] libmachine: STDOUT: 
	I0722 04:27:53.779685    5339 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0722 04:27:53.779703    5339 client.go:171] duration metric: took 231.2095ms to LocalClient.Create
	I0722 04:27:55.781883    5339 start.go:128] duration metric: took 2.256838583s to createHost
	I0722 04:27:55.781964    5339 start.go:83] releasing machines lock for "custom-flannel-055000", held for 2.256975083s
	W0722 04:27:55.782104    5339 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0722 04:27:55.793200    5339 out.go:177] * Deleting "custom-flannel-055000" in qemu2 ...
	W0722 04:27:55.820129    5339 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0722 04:27:55.820163    5339 start.go:729] Will try again in 5 seconds ...
	I0722 04:28:00.822414    5339 start.go:360] acquireMachinesLock for custom-flannel-055000: {Name:mkd413881e612ea8d9ddb0175c22cca270cd2452 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0722 04:28:00.823165    5339 start.go:364] duration metric: took 643.083µs to acquireMachinesLock for "custom-flannel-055000"
	I0722 04:28:00.823447    5339 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-055000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.30.3 ClusterName:custom-flannel-055000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0722 04:28:00.823722    5339 start.go:125] createHost starting for "" (driver="qemu2")
	I0722 04:28:00.830251    5339 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0722 04:28:00.877533    5339 start.go:159] libmachine.API.Create for "custom-flannel-055000" (driver="qemu2")
	I0722 04:28:00.877599    5339 client.go:168] LocalClient.Create starting
	I0722 04:28:00.877740    5339 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19313-1127/.minikube/certs/ca.pem
	I0722 04:28:00.877800    5339 main.go:141] libmachine: Decoding PEM data...
	I0722 04:28:00.877815    5339 main.go:141] libmachine: Parsing certificate...
	I0722 04:28:00.877887    5339 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19313-1127/.minikube/certs/cert.pem
	I0722 04:28:00.877936    5339 main.go:141] libmachine: Decoding PEM data...
	I0722 04:28:00.877949    5339 main.go:141] libmachine: Parsing certificate...
	I0722 04:28:00.878471    5339 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19313-1127/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0722 04:28:01.025132    5339 main.go:141] libmachine: Creating SSH key...
	I0722 04:28:01.084116    5339 main.go:141] libmachine: Creating Disk image...
	I0722 04:28:01.084125    5339 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0722 04:28:01.084318    5339 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/custom-flannel-055000/disk.qcow2.raw /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/custom-flannel-055000/disk.qcow2
	I0722 04:28:01.093686    5339 main.go:141] libmachine: STDOUT: 
	I0722 04:28:01.093713    5339 main.go:141] libmachine: STDERR: 
	I0722 04:28:01.093778    5339 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/custom-flannel-055000/disk.qcow2 +20000M
	I0722 04:28:01.101932    5339 main.go:141] libmachine: STDOUT: Image resized.
	
	I0722 04:28:01.101956    5339 main.go:141] libmachine: STDERR: 
	I0722 04:28:01.101973    5339 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/custom-flannel-055000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/custom-flannel-055000/disk.qcow2
	I0722 04:28:01.101978    5339 main.go:141] libmachine: Starting QEMU VM...
	I0722 04:28:01.101984    5339 qemu.go:418] Using hvf for hardware acceleration
	I0722 04:28:01.102008    5339 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/custom-flannel-055000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19313-1127/.minikube/machines/custom-flannel-055000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/custom-flannel-055000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:9b:9a:0d:1d:fd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/custom-flannel-055000/disk.qcow2
	I0722 04:28:01.103636    5339 main.go:141] libmachine: STDOUT: 
	I0722 04:28:01.103656    5339 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0722 04:28:01.103669    5339 client.go:171] duration metric: took 226.068333ms to LocalClient.Create
	I0722 04:28:03.105699    5339 start.go:128] duration metric: took 2.281997625s to createHost
	I0722 04:28:03.105719    5339 start.go:83] releasing machines lock for "custom-flannel-055000", held for 2.28257325s
	W0722 04:28:03.105798    5339 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-055000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-055000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0722 04:28:03.113330    5339 out.go:177] 
	W0722 04:28:03.118446    5339 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0722 04:28:03.118453    5339 out.go:239] * 
	* 
	W0722 04:28:03.118965    5339 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0722 04:28:03.129376    5339 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (9.73s)

                                                
                                    
TestNetworkPlugins/group/false/Start (9.96s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-055000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-055000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (9.954004958s)

                                                
                                                
-- stdout --
	* [false-055000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19313
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19313-1127/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19313-1127/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "false-055000" primary control-plane node in "false-055000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-055000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0722 04:28:05.476723    5460 out.go:291] Setting OutFile to fd 1 ...
	I0722 04:28:05.476849    5460 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:28:05.476853    5460 out.go:304] Setting ErrFile to fd 2...
	I0722 04:28:05.476855    5460 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:28:05.477001    5460 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19313-1127/.minikube/bin
	I0722 04:28:05.477986    5460 out.go:298] Setting JSON to false
	I0722 04:28:05.494156    5460 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5254,"bootTime":1721642431,"procs":456,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0722 04:28:05.494231    5460 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0722 04:28:05.500327    5460 out.go:177] * [false-055000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0722 04:28:05.507299    5460 out.go:177]   - MINIKUBE_LOCATION=19313
	I0722 04:28:05.507354    5460 notify.go:220] Checking for updates...
	I0722 04:28:05.515252    5460 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19313-1127/kubeconfig
	I0722 04:28:05.518269    5460 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0722 04:28:05.522280    5460 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0722 04:28:05.525303    5460 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19313-1127/.minikube
	I0722 04:28:05.528274    5460 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0722 04:28:05.531535    5460 config.go:182] Loaded profile config "multinode-941000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0722 04:28:05.531601    5460 config.go:182] Loaded profile config "stopped-upgrade-239000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0722 04:28:05.531644    5460 driver.go:392] Setting default libvirt URI to qemu:///system
	I0722 04:28:05.535238    5460 out.go:177] * Using the qemu2 driver based on user configuration
	I0722 04:28:05.542229    5460 start.go:297] selected driver: qemu2
	I0722 04:28:05.542235    5460 start.go:901] validating driver "qemu2" against <nil>
	I0722 04:28:05.542241    5460 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0722 04:28:05.544489    5460 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0722 04:28:05.547304    5460 out.go:177] * Automatically selected the socket_vmnet network
	I0722 04:28:05.551321    5460 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0722 04:28:05.551336    5460 cni.go:84] Creating CNI manager for "false"
	I0722 04:28:05.551362    5460 start.go:340] cluster config:
	{Name:false-055000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:false-055000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:do
cker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_
client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 04:28:05.554818    5460 iso.go:125] acquiring lock: {Name:mkd71eaf3e91c1dd737b75fca5ca69ff9bdad18f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 04:28:05.563219    5460 out.go:177] * Starting "false-055000" primary control-plane node in "false-055000" cluster
	I0722 04:28:05.567226    5460 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0722 04:28:05.567239    5460 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0722 04:28:05.567249    5460 cache.go:56] Caching tarball of preloaded images
	I0722 04:28:05.567298    5460 preload.go:172] Found /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0722 04:28:05.567303    5460 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0722 04:28:05.567361    5460 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/false-055000/config.json ...
	I0722 04:28:05.567373    5460 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/false-055000/config.json: {Name:mkcb59e4e8a12cdf4cf9d9e94c9843c29195313b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 04:28:05.567598    5460 start.go:360] acquireMachinesLock for false-055000: {Name:mkd413881e612ea8d9ddb0175c22cca270cd2452 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0722 04:28:05.567631    5460 start.go:364] duration metric: took 27.458µs to acquireMachinesLock for "false-055000"
	I0722 04:28:05.567641    5460 start.go:93] Provisioning new machine with config: &{Name:false-055000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.3 ClusterName:false-055000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mo
untPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0722 04:28:05.567666    5460 start.go:125] createHost starting for "" (driver="qemu2")
	I0722 04:28:05.571259    5460 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0722 04:28:05.585963    5460 start.go:159] libmachine.API.Create for "false-055000" (driver="qemu2")
	I0722 04:28:05.585994    5460 client.go:168] LocalClient.Create starting
	I0722 04:28:05.586053    5460 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19313-1127/.minikube/certs/ca.pem
	I0722 04:28:05.586086    5460 main.go:141] libmachine: Decoding PEM data...
	I0722 04:28:05.586094    5460 main.go:141] libmachine: Parsing certificate...
	I0722 04:28:05.586129    5460 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19313-1127/.minikube/certs/cert.pem
	I0722 04:28:05.586150    5460 main.go:141] libmachine: Decoding PEM data...
	I0722 04:28:05.586163    5460 main.go:141] libmachine: Parsing certificate...
	I0722 04:28:05.586545    5460 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19313-1127/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0722 04:28:05.725368    5460 main.go:141] libmachine: Creating SSH key...
	I0722 04:28:05.852102    5460 main.go:141] libmachine: Creating Disk image...
	I0722 04:28:05.852110    5460 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0722 04:28:05.852303    5460 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/false-055000/disk.qcow2.raw /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/false-055000/disk.qcow2
	I0722 04:28:05.862589    5460 main.go:141] libmachine: STDOUT: 
	I0722 04:28:05.862612    5460 main.go:141] libmachine: STDERR: 
	I0722 04:28:05.862697    5460 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/false-055000/disk.qcow2 +20000M
	I0722 04:28:05.872517    5460 main.go:141] libmachine: STDOUT: Image resized.
	
	I0722 04:28:05.872545    5460 main.go:141] libmachine: STDERR: 
	I0722 04:28:05.872565    5460 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/false-055000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/false-055000/disk.qcow2
	I0722 04:28:05.872584    5460 main.go:141] libmachine: Starting QEMU VM...
	I0722 04:28:05.872598    5460 qemu.go:418] Using hvf for hardware acceleration
	I0722 04:28:05.872640    5460 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/false-055000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19313-1127/.minikube/machines/false-055000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/false-055000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:39:35:c8:66:d9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/false-055000/disk.qcow2
	I0722 04:28:05.874954    5460 main.go:141] libmachine: STDOUT: 
	I0722 04:28:05.874979    5460 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0722 04:28:05.875002    5460 client.go:171] duration metric: took 289.008083ms to LocalClient.Create
	I0722 04:28:07.877180    5460 start.go:128] duration metric: took 2.309528917s to createHost
	I0722 04:28:07.877267    5460 start.go:83] releasing machines lock for "false-055000", held for 2.309664s
	W0722 04:28:07.877315    5460 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0722 04:28:07.888501    5460 out.go:177] * Deleting "false-055000" in qemu2 ...
	W0722 04:28:07.916540    5460 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0722 04:28:07.916578    5460 start.go:729] Will try again in 5 seconds ...
	I0722 04:28:12.918790    5460 start.go:360] acquireMachinesLock for false-055000: {Name:mkd413881e612ea8d9ddb0175c22cca270cd2452 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0722 04:28:12.919527    5460 start.go:364] duration metric: took 599.333µs to acquireMachinesLock for "false-055000"
	I0722 04:28:12.919643    5460 start.go:93] Provisioning new machine with config: &{Name:false-055000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.3 ClusterName:false-055000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mo
untPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0722 04:28:12.919931    5460 start.go:125] createHost starting for "" (driver="qemu2")
	I0722 04:28:12.925677    5460 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0722 04:28:12.971225    5460 start.go:159] libmachine.API.Create for "false-055000" (driver="qemu2")
	I0722 04:28:12.971281    5460 client.go:168] LocalClient.Create starting
	I0722 04:28:12.971394    5460 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19313-1127/.minikube/certs/ca.pem
	I0722 04:28:12.971455    5460 main.go:141] libmachine: Decoding PEM data...
	I0722 04:28:12.971472    5460 main.go:141] libmachine: Parsing certificate...
	I0722 04:28:12.971531    5460 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19313-1127/.minikube/certs/cert.pem
	I0722 04:28:12.971585    5460 main.go:141] libmachine: Decoding PEM data...
	I0722 04:28:12.971606    5460 main.go:141] libmachine: Parsing certificate...
	I0722 04:28:12.972141    5460 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19313-1127/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0722 04:28:13.119712    5460 main.go:141] libmachine: Creating SSH key...
	I0722 04:28:13.340965    5460 main.go:141] libmachine: Creating Disk image...
	I0722 04:28:13.340977    5460 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0722 04:28:13.341247    5460 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/false-055000/disk.qcow2.raw /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/false-055000/disk.qcow2
	I0722 04:28:13.351138    5460 main.go:141] libmachine: STDOUT: 
	I0722 04:28:13.351157    5460 main.go:141] libmachine: STDERR: 
	I0722 04:28:13.351224    5460 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/false-055000/disk.qcow2 +20000M
	I0722 04:28:13.359223    5460 main.go:141] libmachine: STDOUT: Image resized.
	
	I0722 04:28:13.359287    5460 main.go:141] libmachine: STDERR: 
	I0722 04:28:13.359298    5460 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/false-055000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/false-055000/disk.qcow2
	I0722 04:28:13.359303    5460 main.go:141] libmachine: Starting QEMU VM...
	I0722 04:28:13.359316    5460 qemu.go:418] Using hvf for hardware acceleration
	I0722 04:28:13.359352    5460 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/false-055000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19313-1127/.minikube/machines/false-055000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/false-055000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:50:e8:62:b4:75 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/false-055000/disk.qcow2
	I0722 04:28:13.361094    5460 main.go:141] libmachine: STDOUT: 
	I0722 04:28:13.361155    5460 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0722 04:28:13.361170    5460 client.go:171] duration metric: took 389.888959ms to LocalClient.Create
	I0722 04:28:15.363346    5460 start.go:128] duration metric: took 2.443418583s to createHost
	I0722 04:28:15.363462    5460 start.go:83] releasing machines lock for "false-055000", held for 2.443915125s
	W0722 04:28:15.363789    5460 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p false-055000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p false-055000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0722 04:28:15.372319    5460 out.go:177] 
	W0722 04:28:15.378406    5460 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0722 04:28:15.378429    5460 out.go:239] * 
	* 
	W0722 04:28:15.381019    5460 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0722 04:28:15.389380    5460 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (9.96s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (9.91s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-055000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-055000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (9.907469375s)

                                                
                                                
-- stdout --
	* [kindnet-055000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19313
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19313-1127/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19313-1127/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kindnet-055000" primary control-plane node in "kindnet-055000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-055000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0722 04:28:17.553798    5575 out.go:291] Setting OutFile to fd 1 ...
	I0722 04:28:17.553932    5575 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:28:17.553935    5575 out.go:304] Setting ErrFile to fd 2...
	I0722 04:28:17.553938    5575 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:28:17.554082    5575 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19313-1127/.minikube/bin
	I0722 04:28:17.555160    5575 out.go:298] Setting JSON to false
	I0722 04:28:17.571465    5575 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5266,"bootTime":1721642431,"procs":456,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0722 04:28:17.571526    5575 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0722 04:28:17.577499    5575 out.go:177] * [kindnet-055000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0722 04:28:17.584403    5575 out.go:177]   - MINIKUBE_LOCATION=19313
	I0722 04:28:17.584476    5575 notify.go:220] Checking for updates...
	I0722 04:28:17.590460    5575 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19313-1127/kubeconfig
	I0722 04:28:17.593462    5575 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0722 04:28:17.596513    5575 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0722 04:28:17.599489    5575 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19313-1127/.minikube
	I0722 04:28:17.600889    5575 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0722 04:28:17.603768    5575 config.go:182] Loaded profile config "multinode-941000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0722 04:28:17.603829    5575 config.go:182] Loaded profile config "stopped-upgrade-239000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0722 04:28:17.603888    5575 driver.go:392] Setting default libvirt URI to qemu:///system
	I0722 04:28:17.607480    5575 out.go:177] * Using the qemu2 driver based on user configuration
	I0722 04:28:17.612458    5575 start.go:297] selected driver: qemu2
	I0722 04:28:17.612464    5575 start.go:901] validating driver "qemu2" against <nil>
	I0722 04:28:17.612470    5575 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0722 04:28:17.614490    5575 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0722 04:28:17.618461    5575 out.go:177] * Automatically selected the socket_vmnet network
	I0722 04:28:17.619625    5575 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0722 04:28:17.619654    5575 cni.go:84] Creating CNI manager for "kindnet"
	I0722 04:28:17.619658    5575 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0722 04:28:17.619695    5575 start.go:340] cluster config:
	{Name:kindnet-055000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kindnet-055000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntim
e:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/sock
et_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 04:28:17.622974    5575 iso.go:125] acquiring lock: {Name:mkd71eaf3e91c1dd737b75fca5ca69ff9bdad18f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 04:28:17.631502    5575 out.go:177] * Starting "kindnet-055000" primary control-plane node in "kindnet-055000" cluster
	I0722 04:28:17.635567    5575 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0722 04:28:17.635583    5575 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0722 04:28:17.635593    5575 cache.go:56] Caching tarball of preloaded images
	I0722 04:28:17.635658    5575 preload.go:172] Found /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0722 04:28:17.635666    5575 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0722 04:28:17.635731    5575 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/kindnet-055000/config.json ...
	I0722 04:28:17.635743    5575 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/kindnet-055000/config.json: {Name:mk5d8e9751a5316e14d49681ef3dd4a414b81427 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 04:28:17.635925    5575 start.go:360] acquireMachinesLock for kindnet-055000: {Name:mkd413881e612ea8d9ddb0175c22cca270cd2452 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0722 04:28:17.635958    5575 start.go:364] duration metric: took 27.875µs to acquireMachinesLock for "kindnet-055000"
	I0722 04:28:17.635972    5575 start.go:93] Provisioning new machine with config: &{Name:kindnet-055000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.30.3 ClusterName:kindnet-055000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0722 04:28:17.635999    5575 start.go:125] createHost starting for "" (driver="qemu2")
	I0722 04:28:17.643450    5575 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0722 04:28:17.658418    5575 start.go:159] libmachine.API.Create for "kindnet-055000" (driver="qemu2")
	I0722 04:28:17.658446    5575 client.go:168] LocalClient.Create starting
	I0722 04:28:17.658517    5575 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19313-1127/.minikube/certs/ca.pem
	I0722 04:28:17.658547    5575 main.go:141] libmachine: Decoding PEM data...
	I0722 04:28:17.658556    5575 main.go:141] libmachine: Parsing certificate...
	I0722 04:28:17.658597    5575 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19313-1127/.minikube/certs/cert.pem
	I0722 04:28:17.658620    5575 main.go:141] libmachine: Decoding PEM data...
	I0722 04:28:17.658628    5575 main.go:141] libmachine: Parsing certificate...
	I0722 04:28:17.658978    5575 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19313-1127/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0722 04:28:17.797130    5575 main.go:141] libmachine: Creating SSH key...
	I0722 04:28:18.028669    5575 main.go:141] libmachine: Creating Disk image...
	I0722 04:28:18.028678    5575 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0722 04:28:18.028880    5575 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/kindnet-055000/disk.qcow2.raw /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/kindnet-055000/disk.qcow2
	I0722 04:28:18.038672    5575 main.go:141] libmachine: STDOUT: 
	I0722 04:28:18.038692    5575 main.go:141] libmachine: STDERR: 
	I0722 04:28:18.038752    5575 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/kindnet-055000/disk.qcow2 +20000M
	I0722 04:28:18.046724    5575 main.go:141] libmachine: STDOUT: Image resized.
	
	I0722 04:28:18.046738    5575 main.go:141] libmachine: STDERR: 
	I0722 04:28:18.046757    5575 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/kindnet-055000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/kindnet-055000/disk.qcow2
	I0722 04:28:18.046761    5575 main.go:141] libmachine: Starting QEMU VM...
	I0722 04:28:18.046777    5575 qemu.go:418] Using hvf for hardware acceleration
	I0722 04:28:18.046801    5575 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/kindnet-055000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19313-1127/.minikube/machines/kindnet-055000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/kindnet-055000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:8b:47:3b:01:8e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/kindnet-055000/disk.qcow2
	I0722 04:28:18.048447    5575 main.go:141] libmachine: STDOUT: 
	I0722 04:28:18.048459    5575 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0722 04:28:18.048488    5575 client.go:171] duration metric: took 390.043667ms to LocalClient.Create
	I0722 04:28:20.050593    5575 start.go:128] duration metric: took 2.414623292s to createHost
	I0722 04:28:20.050630    5575 start.go:83] releasing machines lock for "kindnet-055000", held for 2.414706625s
	W0722 04:28:20.050661    5575 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0722 04:28:20.062591    5575 out.go:177] * Deleting "kindnet-055000" in qemu2 ...
	W0722 04:28:20.077287    5575 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0722 04:28:20.077296    5575 start.go:729] Will try again in 5 seconds ...
	I0722 04:28:25.079456    5575 start.go:360] acquireMachinesLock for kindnet-055000: {Name:mkd413881e612ea8d9ddb0175c22cca270cd2452 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0722 04:28:25.079729    5575 start.go:364] duration metric: took 209.5µs to acquireMachinesLock for "kindnet-055000"
	I0722 04:28:25.079816    5575 start.go:93] Provisioning new machine with config: &{Name:kindnet-055000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.30.3 ClusterName:kindnet-055000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0722 04:28:25.079937    5575 start.go:125] createHost starting for "" (driver="qemu2")
	I0722 04:28:25.089104    5575 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0722 04:28:25.121116    5575 start.go:159] libmachine.API.Create for "kindnet-055000" (driver="qemu2")
	I0722 04:28:25.121164    5575 client.go:168] LocalClient.Create starting
	I0722 04:28:25.121275    5575 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19313-1127/.minikube/certs/ca.pem
	I0722 04:28:25.121332    5575 main.go:141] libmachine: Decoding PEM data...
	I0722 04:28:25.121349    5575 main.go:141] libmachine: Parsing certificate...
	I0722 04:28:25.121408    5575 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19313-1127/.minikube/certs/cert.pem
	I0722 04:28:25.121443    5575 main.go:141] libmachine: Decoding PEM data...
	I0722 04:28:25.121455    5575 main.go:141] libmachine: Parsing certificate...
	I0722 04:28:25.121989    5575 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19313-1127/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0722 04:28:25.288333    5575 main.go:141] libmachine: Creating SSH key...
	I0722 04:28:25.372851    5575 main.go:141] libmachine: Creating Disk image...
	I0722 04:28:25.372857    5575 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0722 04:28:25.373060    5575 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/kindnet-055000/disk.qcow2.raw /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/kindnet-055000/disk.qcow2
	I0722 04:28:25.382516    5575 main.go:141] libmachine: STDOUT: 
	I0722 04:28:25.382540    5575 main.go:141] libmachine: STDERR: 
	I0722 04:28:25.382596    5575 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/kindnet-055000/disk.qcow2 +20000M
	I0722 04:28:25.390538    5575 main.go:141] libmachine: STDOUT: Image resized.
	
	I0722 04:28:25.390554    5575 main.go:141] libmachine: STDERR: 
	I0722 04:28:25.390565    5575 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/kindnet-055000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/kindnet-055000/disk.qcow2
	I0722 04:28:25.390570    5575 main.go:141] libmachine: Starting QEMU VM...
	I0722 04:28:25.390587    5575 qemu.go:418] Using hvf for hardware acceleration
	I0722 04:28:25.390629    5575 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/kindnet-055000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19313-1127/.minikube/machines/kindnet-055000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/kindnet-055000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:31:a8:9e:d3:cb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/kindnet-055000/disk.qcow2
	I0722 04:28:25.392258    5575 main.go:141] libmachine: STDOUT: 
	I0722 04:28:25.392275    5575 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0722 04:28:25.392288    5575 client.go:171] duration metric: took 271.122291ms to LocalClient.Create
	I0722 04:28:27.394469    5575 start.go:128] duration metric: took 2.314540375s to createHost
	I0722 04:28:27.394602    5575 start.go:83] releasing machines lock for "kindnet-055000", held for 2.314877333s
	W0722 04:28:27.394989    5575 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-055000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kindnet-055000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0722 04:28:27.404511    5575 out.go:177] 
	W0722 04:28:27.408559    5575 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0722 04:28:27.408593    5575 out.go:239] * 
	* 
	W0722 04:28:27.411024    5575 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0722 04:28:27.420493    5575 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (9.91s)
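Note: the kindnet failure above and the flannel, enable-default-cni and bridge failures below all exit the same way: libmachine launches the VM through /opt/socket_vmnet/bin/socket_vmnet_client, and the connection to the /var/run/socket_vmnet unix socket is refused, so the qemu2 start never gets past VM creation. The paths are taken from the log itself; the commands below are only a suggested host-side sanity check (not part of the test run) to confirm whether the socket_vmnet daemon is actually running and listening at that path:

    # hypothetical check on the CI host, using the socket path shown in the log
    pgrep -fl socket_vmnet          # expect a running socket_vmnet process
    ls -l /var/run/socket_vmnet     # expect a unix socket at the path minikube uses
    # if either is missing, restart socket_vmnet as described in the minikube qemu2 driver docs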

TestNetworkPlugins/group/flannel/Start (9.82s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-055000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-055000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (9.821300042s)

-- stdout --
	* [flannel-055000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19313
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19313-1127/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19313-1127/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "flannel-055000" primary control-plane node in "flannel-055000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-055000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0722 04:28:29.703831    5692 out.go:291] Setting OutFile to fd 1 ...
	I0722 04:28:29.703965    5692 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:28:29.703969    5692 out.go:304] Setting ErrFile to fd 2...
	I0722 04:28:29.703971    5692 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:28:29.704119    5692 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19313-1127/.minikube/bin
	I0722 04:28:29.705194    5692 out.go:298] Setting JSON to false
	I0722 04:28:29.721325    5692 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5278,"bootTime":1721642431,"procs":456,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0722 04:28:29.721396    5692 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0722 04:28:29.727350    5692 out.go:177] * [flannel-055000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0722 04:28:29.735430    5692 out.go:177]   - MINIKUBE_LOCATION=19313
	I0722 04:28:29.735499    5692 notify.go:220] Checking for updates...
	I0722 04:28:29.743373    5692 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19313-1127/kubeconfig
	I0722 04:28:29.746401    5692 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0722 04:28:29.749431    5692 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0722 04:28:29.752375    5692 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19313-1127/.minikube
	I0722 04:28:29.755357    5692 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0722 04:28:29.758688    5692 config.go:182] Loaded profile config "multinode-941000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0722 04:28:29.758757    5692 config.go:182] Loaded profile config "stopped-upgrade-239000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0722 04:28:29.758803    5692 driver.go:392] Setting default libvirt URI to qemu:///system
	I0722 04:28:29.762460    5692 out.go:177] * Using the qemu2 driver based on user configuration
	I0722 04:28:29.769390    5692 start.go:297] selected driver: qemu2
	I0722 04:28:29.769396    5692 start.go:901] validating driver "qemu2" against <nil>
	I0722 04:28:29.769405    5692 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0722 04:28:29.771631    5692 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0722 04:28:29.774368    5692 out.go:177] * Automatically selected the socket_vmnet network
	I0722 04:28:29.777368    5692 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0722 04:28:29.777383    5692 cni.go:84] Creating CNI manager for "flannel"
	I0722 04:28:29.777386    5692 start_flags.go:319] Found "Flannel" CNI - setting NetworkPlugin=cni
	I0722 04:28:29.777409    5692 start.go:340] cluster config:
	{Name:flannel-055000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:flannel-055000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntim
e:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/sock
et_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 04:28:29.780735    5692 iso.go:125] acquiring lock: {Name:mkd71eaf3e91c1dd737b75fca5ca69ff9bdad18f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 04:28:29.788380    5692 out.go:177] * Starting "flannel-055000" primary control-plane node in "flannel-055000" cluster
	I0722 04:28:29.792387    5692 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0722 04:28:29.792402    5692 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0722 04:28:29.792412    5692 cache.go:56] Caching tarball of preloaded images
	I0722 04:28:29.792464    5692 preload.go:172] Found /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0722 04:28:29.792470    5692 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0722 04:28:29.792531    5692 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/flannel-055000/config.json ...
	I0722 04:28:29.792543    5692 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/flannel-055000/config.json: {Name:mk36fede67b7b0b37d0577a27969a527a4850cd2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 04:28:29.792762    5692 start.go:360] acquireMachinesLock for flannel-055000: {Name:mkd413881e612ea8d9ddb0175c22cca270cd2452 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0722 04:28:29.792794    5692 start.go:364] duration metric: took 26.417µs to acquireMachinesLock for "flannel-055000"
	I0722 04:28:29.792805    5692 start.go:93] Provisioning new machine with config: &{Name:flannel-055000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.30.3 ClusterName:flannel-055000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0722 04:28:29.792830    5692 start.go:125] createHost starting for "" (driver="qemu2")
	I0722 04:28:29.801372    5692 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0722 04:28:29.817619    5692 start.go:159] libmachine.API.Create for "flannel-055000" (driver="qemu2")
	I0722 04:28:29.817649    5692 client.go:168] LocalClient.Create starting
	I0722 04:28:29.817714    5692 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19313-1127/.minikube/certs/ca.pem
	I0722 04:28:29.817743    5692 main.go:141] libmachine: Decoding PEM data...
	I0722 04:28:29.817750    5692 main.go:141] libmachine: Parsing certificate...
	I0722 04:28:29.817787    5692 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19313-1127/.minikube/certs/cert.pem
	I0722 04:28:29.817810    5692 main.go:141] libmachine: Decoding PEM data...
	I0722 04:28:29.817819    5692 main.go:141] libmachine: Parsing certificate...
	I0722 04:28:29.818159    5692 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19313-1127/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0722 04:28:29.956178    5692 main.go:141] libmachine: Creating SSH key...
	I0722 04:28:30.086388    5692 main.go:141] libmachine: Creating Disk image...
	I0722 04:28:30.086398    5692 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0722 04:28:30.086626    5692 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/flannel-055000/disk.qcow2.raw /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/flannel-055000/disk.qcow2
	I0722 04:28:30.096344    5692 main.go:141] libmachine: STDOUT: 
	I0722 04:28:30.096363    5692 main.go:141] libmachine: STDERR: 
	I0722 04:28:30.096419    5692 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/flannel-055000/disk.qcow2 +20000M
	I0722 04:28:30.104256    5692 main.go:141] libmachine: STDOUT: Image resized.
	
	I0722 04:28:30.104272    5692 main.go:141] libmachine: STDERR: 
	I0722 04:28:30.104291    5692 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/flannel-055000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/flannel-055000/disk.qcow2
	I0722 04:28:30.104297    5692 main.go:141] libmachine: Starting QEMU VM...
	I0722 04:28:30.104307    5692 qemu.go:418] Using hvf for hardware acceleration
	I0722 04:28:30.104331    5692 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/flannel-055000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19313-1127/.minikube/machines/flannel-055000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/flannel-055000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:33:2c:6d:83:d8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/flannel-055000/disk.qcow2
	I0722 04:28:30.105978    5692 main.go:141] libmachine: STDOUT: 
	I0722 04:28:30.105992    5692 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0722 04:28:30.106009    5692 client.go:171] duration metric: took 288.359792ms to LocalClient.Create
	I0722 04:28:32.108153    5692 start.go:128] duration metric: took 2.315348s to createHost
	I0722 04:28:32.108185    5692 start.go:83] releasing machines lock for "flannel-055000", held for 2.315423625s
	W0722 04:28:32.108225    5692 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0722 04:28:32.116310    5692 out.go:177] * Deleting "flannel-055000" in qemu2 ...
	W0722 04:28:32.129540    5692 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0722 04:28:32.129567    5692 start.go:729] Will try again in 5 seconds ...
	I0722 04:28:37.131737    5692 start.go:360] acquireMachinesLock for flannel-055000: {Name:mkd413881e612ea8d9ddb0175c22cca270cd2452 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0722 04:28:37.132207    5692 start.go:364] duration metric: took 345.542µs to acquireMachinesLock for "flannel-055000"
	I0722 04:28:37.132302    5692 start.go:93] Provisioning new machine with config: &{Name:flannel-055000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.30.3 ClusterName:flannel-055000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0722 04:28:37.132546    5692 start.go:125] createHost starting for "" (driver="qemu2")
	I0722 04:28:37.138291    5692 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0722 04:28:37.180240    5692 start.go:159] libmachine.API.Create for "flannel-055000" (driver="qemu2")
	I0722 04:28:37.180289    5692 client.go:168] LocalClient.Create starting
	I0722 04:28:37.180413    5692 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19313-1127/.minikube/certs/ca.pem
	I0722 04:28:37.180474    5692 main.go:141] libmachine: Decoding PEM data...
	I0722 04:28:37.180496    5692 main.go:141] libmachine: Parsing certificate...
	I0722 04:28:37.180553    5692 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19313-1127/.minikube/certs/cert.pem
	I0722 04:28:37.180593    5692 main.go:141] libmachine: Decoding PEM data...
	I0722 04:28:37.180657    5692 main.go:141] libmachine: Parsing certificate...
	I0722 04:28:37.181148    5692 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19313-1127/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0722 04:28:37.329287    5692 main.go:141] libmachine: Creating SSH key...
	I0722 04:28:37.435297    5692 main.go:141] libmachine: Creating Disk image...
	I0722 04:28:37.435305    5692 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0722 04:28:37.435516    5692 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/flannel-055000/disk.qcow2.raw /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/flannel-055000/disk.qcow2
	I0722 04:28:37.444874    5692 main.go:141] libmachine: STDOUT: 
	I0722 04:28:37.444892    5692 main.go:141] libmachine: STDERR: 
	I0722 04:28:37.444951    5692 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/flannel-055000/disk.qcow2 +20000M
	I0722 04:28:37.452793    5692 main.go:141] libmachine: STDOUT: Image resized.
	
	I0722 04:28:37.452811    5692 main.go:141] libmachine: STDERR: 
	I0722 04:28:37.452825    5692 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/flannel-055000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/flannel-055000/disk.qcow2
	I0722 04:28:37.452835    5692 main.go:141] libmachine: Starting QEMU VM...
	I0722 04:28:37.452843    5692 qemu.go:418] Using hvf for hardware acceleration
	I0722 04:28:37.452875    5692 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/flannel-055000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19313-1127/.minikube/machines/flannel-055000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/flannel-055000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:71:4b:3b:6a:8c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/flannel-055000/disk.qcow2
	I0722 04:28:37.454485    5692 main.go:141] libmachine: STDOUT: 
	I0722 04:28:37.454509    5692 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0722 04:28:37.454523    5692 client.go:171] duration metric: took 274.232958ms to LocalClient.Create
	I0722 04:28:39.456657    5692 start.go:128] duration metric: took 2.324113083s to createHost
	I0722 04:28:39.456719    5692 start.go:83] releasing machines lock for "flannel-055000", held for 2.32453075s
	W0722 04:28:39.456974    5692 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p flannel-055000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p flannel-055000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0722 04:28:39.469482    5692 out.go:177] 
	W0722 04:28:39.473503    5692 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0722 04:28:39.473651    5692 out.go:239] * 
	* 
	W0722 04:28:39.475877    5692 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0722 04:28:39.487411    5692 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (9.82s)

TestNetworkPlugins/group/enable-default-cni/Start (9.74s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-055000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-055000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (9.733512042s)

-- stdout --
	* [enable-default-cni-055000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19313
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19313-1127/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19313-1127/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "enable-default-cni-055000" primary control-plane node in "enable-default-cni-055000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-055000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0722 04:28:41.852453    5814 out.go:291] Setting OutFile to fd 1 ...
	I0722 04:28:41.852580    5814 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:28:41.852583    5814 out.go:304] Setting ErrFile to fd 2...
	I0722 04:28:41.852585    5814 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:28:41.852726    5814 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19313-1127/.minikube/bin
	I0722 04:28:41.853775    5814 out.go:298] Setting JSON to false
	I0722 04:28:41.870048    5814 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5290,"bootTime":1721642431,"procs":456,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0722 04:28:41.870120    5814 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0722 04:28:41.876620    5814 out.go:177] * [enable-default-cni-055000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0722 04:28:41.883603    5814 out.go:177]   - MINIKUBE_LOCATION=19313
	I0722 04:28:41.883638    5814 notify.go:220] Checking for updates...
	I0722 04:28:41.890542    5814 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19313-1127/kubeconfig
	I0722 04:28:41.893595    5814 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0722 04:28:41.896667    5814 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0722 04:28:41.899544    5814 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19313-1127/.minikube
	I0722 04:28:41.902555    5814 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0722 04:28:41.905946    5814 config.go:182] Loaded profile config "multinode-941000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0722 04:28:41.906017    5814 config.go:182] Loaded profile config "stopped-upgrade-239000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0722 04:28:41.906059    5814 driver.go:392] Setting default libvirt URI to qemu:///system
	I0722 04:28:41.909505    5814 out.go:177] * Using the qemu2 driver based on user configuration
	I0722 04:28:41.916573    5814 start.go:297] selected driver: qemu2
	I0722 04:28:41.916581    5814 start.go:901] validating driver "qemu2" against <nil>
	I0722 04:28:41.916588    5814 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0722 04:28:41.918992    5814 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0722 04:28:41.922463    5814 out.go:177] * Automatically selected the socket_vmnet network
	E0722 04:28:41.925650    5814 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0722 04:28:41.925666    5814 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0722 04:28:41.925701    5814 cni.go:84] Creating CNI manager for "bridge"
	I0722 04:28:41.925708    5814 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0722 04:28:41.925737    5814 start.go:340] cluster config:
	{Name:enable-default-cni-055000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:enable-default-cni-055000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster
.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/
socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 04:28:41.929706    5814 iso.go:125] acquiring lock: {Name:mkd71eaf3e91c1dd737b75fca5ca69ff9bdad18f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 04:28:41.937564    5814 out.go:177] * Starting "enable-default-cni-055000" primary control-plane node in "enable-default-cni-055000" cluster
	I0722 04:28:41.941532    5814 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0722 04:28:41.941550    5814 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0722 04:28:41.941569    5814 cache.go:56] Caching tarball of preloaded images
	I0722 04:28:41.941633    5814 preload.go:172] Found /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0722 04:28:41.941639    5814 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0722 04:28:41.941700    5814 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/enable-default-cni-055000/config.json ...
	I0722 04:28:41.941713    5814 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/enable-default-cni-055000/config.json: {Name:mk85412e0460289f225c01dcf05048a9f190db46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 04:28:41.941918    5814 start.go:360] acquireMachinesLock for enable-default-cni-055000: {Name:mkd413881e612ea8d9ddb0175c22cca270cd2452 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0722 04:28:41.941950    5814 start.go:364] duration metric: took 26.25µs to acquireMachinesLock for "enable-default-cni-055000"
	I0722 04:28:41.941960    5814 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-055000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.30.3 ClusterName:enable-default-cni-055000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountM
Size:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0722 04:28:41.941990    5814 start.go:125] createHost starting for "" (driver="qemu2")
	I0722 04:28:41.945540    5814 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0722 04:28:41.960768    5814 start.go:159] libmachine.API.Create for "enable-default-cni-055000" (driver="qemu2")
	I0722 04:28:41.960793    5814 client.go:168] LocalClient.Create starting
	I0722 04:28:41.960857    5814 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19313-1127/.minikube/certs/ca.pem
	I0722 04:28:41.960886    5814 main.go:141] libmachine: Decoding PEM data...
	I0722 04:28:41.960895    5814 main.go:141] libmachine: Parsing certificate...
	I0722 04:28:41.960930    5814 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19313-1127/.minikube/certs/cert.pem
	I0722 04:28:41.960952    5814 main.go:141] libmachine: Decoding PEM data...
	I0722 04:28:41.960959    5814 main.go:141] libmachine: Parsing certificate...
	I0722 04:28:41.961292    5814 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19313-1127/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0722 04:28:42.098724    5814 main.go:141] libmachine: Creating SSH key...
	I0722 04:28:42.183303    5814 main.go:141] libmachine: Creating Disk image...
	I0722 04:28:42.183312    5814 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0722 04:28:42.183760    5814 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/enable-default-cni-055000/disk.qcow2.raw /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/enable-default-cni-055000/disk.qcow2
	I0722 04:28:42.193338    5814 main.go:141] libmachine: STDOUT: 
	I0722 04:28:42.193362    5814 main.go:141] libmachine: STDERR: 
	I0722 04:28:42.193409    5814 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/enable-default-cni-055000/disk.qcow2 +20000M
	I0722 04:28:42.201631    5814 main.go:141] libmachine: STDOUT: Image resized.
	
	I0722 04:28:42.201647    5814 main.go:141] libmachine: STDERR: 
	I0722 04:28:42.201662    5814 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/enable-default-cni-055000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/enable-default-cni-055000/disk.qcow2
	I0722 04:28:42.201668    5814 main.go:141] libmachine: Starting QEMU VM...
	I0722 04:28:42.201685    5814 qemu.go:418] Using hvf for hardware acceleration
	I0722 04:28:42.201712    5814 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/enable-default-cni-055000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19313-1127/.minikube/machines/enable-default-cni-055000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/enable-default-cni-055000/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:b6:aa:23:82:73 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/enable-default-cni-055000/disk.qcow2
	I0722 04:28:42.203462    5814 main.go:141] libmachine: STDOUT: 
	I0722 04:28:42.203479    5814 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0722 04:28:42.203496    5814 client.go:171] duration metric: took 242.703459ms to LocalClient.Create
	I0722 04:28:44.205648    5814 start.go:128] duration metric: took 2.263671792s to createHost
	I0722 04:28:44.205708    5814 start.go:83] releasing machines lock for "enable-default-cni-055000", held for 2.263789292s
	W0722 04:28:44.205781    5814 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0722 04:28:44.216644    5814 out.go:177] * Deleting "enable-default-cni-055000" in qemu2 ...
	W0722 04:28:44.235561    5814 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0722 04:28:44.235589    5814 start.go:729] Will try again in 5 seconds ...
	I0722 04:28:49.237740    5814 start.go:360] acquireMachinesLock for enable-default-cni-055000: {Name:mkd413881e612ea8d9ddb0175c22cca270cd2452 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0722 04:28:49.238201    5814 start.go:364] duration metric: took 368.083µs to acquireMachinesLock for "enable-default-cni-055000"
	I0722 04:28:49.238324    5814 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-055000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.30.3 ClusterName:enable-default-cni-055000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountM
Size:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0722 04:28:49.238583    5814 start.go:125] createHost starting for "" (driver="qemu2")
	I0722 04:28:49.247133    5814 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0722 04:28:49.285891    5814 start.go:159] libmachine.API.Create for "enable-default-cni-055000" (driver="qemu2")
	I0722 04:28:49.285942    5814 client.go:168] LocalClient.Create starting
	I0722 04:28:49.286054    5814 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19313-1127/.minikube/certs/ca.pem
	I0722 04:28:49.286125    5814 main.go:141] libmachine: Decoding PEM data...
	I0722 04:28:49.286140    5814 main.go:141] libmachine: Parsing certificate...
	I0722 04:28:49.286197    5814 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19313-1127/.minikube/certs/cert.pem
	I0722 04:28:49.286243    5814 main.go:141] libmachine: Decoding PEM data...
	I0722 04:28:49.286258    5814 main.go:141] libmachine: Parsing certificate...
	I0722 04:28:49.286783    5814 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19313-1127/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0722 04:28:49.432849    5814 main.go:141] libmachine: Creating SSH key...
	I0722 04:28:49.494612    5814 main.go:141] libmachine: Creating Disk image...
	I0722 04:28:49.494621    5814 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0722 04:28:49.494815    5814 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/enable-default-cni-055000/disk.qcow2.raw /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/enable-default-cni-055000/disk.qcow2
	I0722 04:28:49.504187    5814 main.go:141] libmachine: STDOUT: 
	I0722 04:28:49.504212    5814 main.go:141] libmachine: STDERR: 
	I0722 04:28:49.504272    5814 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/enable-default-cni-055000/disk.qcow2 +20000M
	I0722 04:28:49.512177    5814 main.go:141] libmachine: STDOUT: Image resized.
	
	I0722 04:28:49.512194    5814 main.go:141] libmachine: STDERR: 
	I0722 04:28:49.512207    5814 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/enable-default-cni-055000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/enable-default-cni-055000/disk.qcow2
	I0722 04:28:49.512212    5814 main.go:141] libmachine: Starting QEMU VM...
	I0722 04:28:49.512222    5814 qemu.go:418] Using hvf for hardware acceleration
	I0722 04:28:49.512264    5814 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/enable-default-cni-055000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19313-1127/.minikube/machines/enable-default-cni-055000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/enable-default-cni-055000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:0d:a8:8b:a9:eb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/enable-default-cni-055000/disk.qcow2
	I0722 04:28:49.513998    5814 main.go:141] libmachine: STDOUT: 
	I0722 04:28:49.514015    5814 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0722 04:28:49.514028    5814 client.go:171] duration metric: took 228.085417ms to LocalClient.Create
	I0722 04:28:51.516199    5814 start.go:128] duration metric: took 2.277620875s to createHost
	I0722 04:28:51.516284    5814 start.go:83] releasing machines lock for "enable-default-cni-055000", held for 2.278100417s
	W0722 04:28:51.516716    5814 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-055000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-055000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0722 04:28:51.526354    5814 out.go:177] 
	W0722 04:28:51.532494    5814 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0722 04:28:51.532541    5814 out.go:239] * 
	* 
	W0722 04:28:51.534498    5814 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0722 04:28:51.544327    5814 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (9.74s)
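Note: this profile also logs "Found deprecated --enable-default-cni flag, setting --cni=bridge" (E0722 04:28:41.925650 above), so the generated cluster config uses the bridge CNI, matching the bridge test that follows. A roughly equivalent invocation, assuming the same harness flags, would be:

    out/minikube-darwin-arm64 start -p enable-default-cni-055000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2

Either way the start fails with the same socket_vmnet "Connection refused" error, so the CNI selection is not the cause of this failure.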

TestNetworkPlugins/group/bridge/Start (9.72s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-055000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-055000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (9.716513834s)

-- stdout --
	* [bridge-055000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19313
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19313-1127/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19313-1127/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "bridge-055000" primary control-plane node in "bridge-055000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-055000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0722 04:28:53.750369    5927 out.go:291] Setting OutFile to fd 1 ...
	I0722 04:28:53.750518    5927 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:28:53.750526    5927 out.go:304] Setting ErrFile to fd 2...
	I0722 04:28:53.750529    5927 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:28:53.750669    5927 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19313-1127/.minikube/bin
	I0722 04:28:53.751823    5927 out.go:298] Setting JSON to false
	I0722 04:28:53.768022    5927 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5302,"bootTime":1721642431,"procs":455,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0722 04:28:53.768091    5927 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0722 04:28:53.773708    5927 out.go:177] * [bridge-055000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0722 04:28:53.780700    5927 out.go:177]   - MINIKUBE_LOCATION=19313
	I0722 04:28:53.780733    5927 notify.go:220] Checking for updates...
	I0722 04:28:53.787647    5927 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19313-1127/kubeconfig
	I0722 04:28:53.790607    5927 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0722 04:28:53.793616    5927 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0722 04:28:53.796525    5927 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19313-1127/.minikube
	I0722 04:28:53.799615    5927 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0722 04:28:53.803002    5927 config.go:182] Loaded profile config "multinode-941000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0722 04:28:53.803064    5927 config.go:182] Loaded profile config "stopped-upgrade-239000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0722 04:28:53.803111    5927 driver.go:392] Setting default libvirt URI to qemu:///system
	I0722 04:28:53.806550    5927 out.go:177] * Using the qemu2 driver based on user configuration
	I0722 04:28:53.813622    5927 start.go:297] selected driver: qemu2
	I0722 04:28:53.813628    5927 start.go:901] validating driver "qemu2" against <nil>
	I0722 04:28:53.813633    5927 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0722 04:28:53.815754    5927 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0722 04:28:53.816975    5927 out.go:177] * Automatically selected the socket_vmnet network
	I0722 04:28:53.819660    5927 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0722 04:28:53.819680    5927 cni.go:84] Creating CNI manager for "bridge"
	I0722 04:28:53.819682    5927 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0722 04:28:53.819710    5927 start.go:340] cluster config:
	{Name:bridge-055000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:bridge-055000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:
docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_
vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 04:28:53.823111    5927 iso.go:125] acquiring lock: {Name:mkd71eaf3e91c1dd737b75fca5ca69ff9bdad18f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 04:28:53.830597    5927 out.go:177] * Starting "bridge-055000" primary control-plane node in "bridge-055000" cluster
	I0722 04:28:53.834619    5927 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0722 04:28:53.834634    5927 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0722 04:28:53.834645    5927 cache.go:56] Caching tarball of preloaded images
	I0722 04:28:53.834700    5927 preload.go:172] Found /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0722 04:28:53.834705    5927 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0722 04:28:53.834774    5927 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/bridge-055000/config.json ...
	I0722 04:28:53.834788    5927 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/bridge-055000/config.json: {Name:mk3bbbc55af37fe932949e0ca8d0df190752cab9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 04:28:53.835061    5927 start.go:360] acquireMachinesLock for bridge-055000: {Name:mkd413881e612ea8d9ddb0175c22cca270cd2452 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0722 04:28:53.835090    5927 start.go:364] duration metric: took 24.125µs to acquireMachinesLock for "bridge-055000"
	I0722 04:28:53.835100    5927 start.go:93] Provisioning new machine with config: &{Name:bridge-055000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.30.3 ClusterName:bridge-055000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0722 04:28:53.835124    5927 start.go:125] createHost starting for "" (driver="qemu2")
	I0722 04:28:53.842630    5927 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0722 04:28:53.857669    5927 start.go:159] libmachine.API.Create for "bridge-055000" (driver="qemu2")
	I0722 04:28:53.857692    5927 client.go:168] LocalClient.Create starting
	I0722 04:28:53.857754    5927 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19313-1127/.minikube/certs/ca.pem
	I0722 04:28:53.857783    5927 main.go:141] libmachine: Decoding PEM data...
	I0722 04:28:53.857794    5927 main.go:141] libmachine: Parsing certificate...
	I0722 04:28:53.857834    5927 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19313-1127/.minikube/certs/cert.pem
	I0722 04:28:53.857856    5927 main.go:141] libmachine: Decoding PEM data...
	I0722 04:28:53.857867    5927 main.go:141] libmachine: Parsing certificate...
	I0722 04:28:53.858269    5927 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19313-1127/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0722 04:28:53.997478    5927 main.go:141] libmachine: Creating SSH key...
	I0722 04:28:54.045024    5927 main.go:141] libmachine: Creating Disk image...
	I0722 04:28:54.045031    5927 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0722 04:28:54.045226    5927 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/bridge-055000/disk.qcow2.raw /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/bridge-055000/disk.qcow2
	I0722 04:28:54.054426    5927 main.go:141] libmachine: STDOUT: 
	I0722 04:28:54.054444    5927 main.go:141] libmachine: STDERR: 
	I0722 04:28:54.054496    5927 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/bridge-055000/disk.qcow2 +20000M
	I0722 04:28:54.062780    5927 main.go:141] libmachine: STDOUT: Image resized.
	
	I0722 04:28:54.062796    5927 main.go:141] libmachine: STDERR: 
	I0722 04:28:54.062811    5927 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/bridge-055000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/bridge-055000/disk.qcow2
	I0722 04:28:54.062814    5927 main.go:141] libmachine: Starting QEMU VM...
	I0722 04:28:54.062828    5927 qemu.go:418] Using hvf for hardware acceleration
	I0722 04:28:54.062855    5927 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/bridge-055000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19313-1127/.minikube/machines/bridge-055000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/bridge-055000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:d6:b9:8a:cb:4b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/bridge-055000/disk.qcow2
	I0722 04:28:54.064601    5927 main.go:141] libmachine: STDOUT: 
	I0722 04:28:54.064617    5927 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0722 04:28:54.064635    5927 client.go:171] duration metric: took 206.942666ms to LocalClient.Create
	I0722 04:28:56.066976    5927 start.go:128] duration metric: took 2.231844958s to createHost
	I0722 04:28:56.067088    5927 start.go:83] releasing machines lock for "bridge-055000", held for 2.232025167s
	W0722 04:28:56.067152    5927 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0722 04:28:56.077056    5927 out.go:177] * Deleting "bridge-055000" in qemu2 ...
	W0722 04:28:56.099629    5927 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0722 04:28:56.099646    5927 start.go:729] Will try again in 5 seconds ...
	I0722 04:29:01.101756    5927 start.go:360] acquireMachinesLock for bridge-055000: {Name:mkd413881e612ea8d9ddb0175c22cca270cd2452 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0722 04:29:01.102061    5927 start.go:364] duration metric: took 248.834µs to acquireMachinesLock for "bridge-055000"
	I0722 04:29:01.102097    5927 start.go:93] Provisioning new machine with config: &{Name:bridge-055000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.30.3 ClusterName:bridge-055000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0722 04:29:01.102267    5927 start.go:125] createHost starting for "" (driver="qemu2")
	I0722 04:29:01.111772    5927 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0722 04:29:01.147996    5927 start.go:159] libmachine.API.Create for "bridge-055000" (driver="qemu2")
	I0722 04:29:01.148053    5927 client.go:168] LocalClient.Create starting
	I0722 04:29:01.148156    5927 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19313-1127/.minikube/certs/ca.pem
	I0722 04:29:01.148212    5927 main.go:141] libmachine: Decoding PEM data...
	I0722 04:29:01.148224    5927 main.go:141] libmachine: Parsing certificate...
	I0722 04:29:01.148284    5927 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19313-1127/.minikube/certs/cert.pem
	I0722 04:29:01.148322    5927 main.go:141] libmachine: Decoding PEM data...
	I0722 04:29:01.148336    5927 main.go:141] libmachine: Parsing certificate...
	I0722 04:29:01.148978    5927 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19313-1127/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0722 04:29:01.296565    5927 main.go:141] libmachine: Creating SSH key...
	I0722 04:29:01.379155    5927 main.go:141] libmachine: Creating Disk image...
	I0722 04:29:01.379163    5927 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0722 04:29:01.379386    5927 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/bridge-055000/disk.qcow2.raw /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/bridge-055000/disk.qcow2
	I0722 04:29:01.388786    5927 main.go:141] libmachine: STDOUT: 
	I0722 04:29:01.388804    5927 main.go:141] libmachine: STDERR: 
	I0722 04:29:01.388862    5927 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/bridge-055000/disk.qcow2 +20000M
	I0722 04:29:01.396851    5927 main.go:141] libmachine: STDOUT: Image resized.
	
	I0722 04:29:01.396866    5927 main.go:141] libmachine: STDERR: 
	I0722 04:29:01.396878    5927 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/bridge-055000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/bridge-055000/disk.qcow2
	I0722 04:29:01.396883    5927 main.go:141] libmachine: Starting QEMU VM...
	I0722 04:29:01.396897    5927 qemu.go:418] Using hvf for hardware acceleration
	I0722 04:29:01.396925    5927 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/bridge-055000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19313-1127/.minikube/machines/bridge-055000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/bridge-055000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:a4:46:45:b6:64 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/bridge-055000/disk.qcow2
	I0722 04:29:01.398550    5927 main.go:141] libmachine: STDOUT: 
	I0722 04:29:01.398566    5927 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0722 04:29:01.398576    5927 client.go:171] duration metric: took 250.523167ms to LocalClient.Create
	I0722 04:29:03.400744    5927 start.go:128] duration metric: took 2.298479917s to createHost
	I0722 04:29:03.400830    5927 start.go:83] releasing machines lock for "bridge-055000", held for 2.298790625s
	W0722 04:29:03.401269    5927 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p bridge-055000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p bridge-055000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0722 04:29:03.409944    5927 out.go:177] 
	W0722 04:29:03.416163    5927 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0722 04:29:03.416216    5927 out.go:239] * 
	* 
	W0722 04:29:03.419322    5927 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0722 04:29:03.427004    5927 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (9.72s)

TestNetworkPlugins/group/kubenet/Start (9.86s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-055000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-055000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (9.859772208s)

-- stdout --
	* [kubenet-055000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19313
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19313-1127/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19313-1127/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubenet-055000" primary control-plane node in "kubenet-055000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-055000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0722 04:29:05.585313    6042 out.go:291] Setting OutFile to fd 1 ...
	I0722 04:29:05.585440    6042 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:29:05.585444    6042 out.go:304] Setting ErrFile to fd 2...
	I0722 04:29:05.585446    6042 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:29:05.585567    6042 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19313-1127/.minikube/bin
	I0722 04:29:05.586622    6042 out.go:298] Setting JSON to false
	I0722 04:29:05.602952    6042 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5314,"bootTime":1721642431,"procs":457,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0722 04:29:05.603058    6042 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0722 04:29:05.607930    6042 out.go:177] * [kubenet-055000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0722 04:29:05.615766    6042 out.go:177]   - MINIKUBE_LOCATION=19313
	I0722 04:29:05.615891    6042 notify.go:220] Checking for updates...
	I0722 04:29:05.623715    6042 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19313-1127/kubeconfig
	I0722 04:29:05.627734    6042 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0722 04:29:05.629207    6042 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0722 04:29:05.632736    6042 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19313-1127/.minikube
	I0722 04:29:05.635748    6042 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0722 04:29:05.639016    6042 config.go:182] Loaded profile config "multinode-941000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0722 04:29:05.639079    6042 config.go:182] Loaded profile config "stopped-upgrade-239000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0722 04:29:05.639128    6042 driver.go:392] Setting default libvirt URI to qemu:///system
	I0722 04:29:05.641707    6042 out.go:177] * Using the qemu2 driver based on user configuration
	I0722 04:29:05.648782    6042 start.go:297] selected driver: qemu2
	I0722 04:29:05.648789    6042 start.go:901] validating driver "qemu2" against <nil>
	I0722 04:29:05.648796    6042 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0722 04:29:05.651223    6042 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0722 04:29:05.654740    6042 out.go:177] * Automatically selected the socket_vmnet network
	I0722 04:29:05.657829    6042 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0722 04:29:05.657842    6042 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0722 04:29:05.657872    6042 start.go:340] cluster config:
	{Name:kubenet-055000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:kubenet-055000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntim
e:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_
vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 04:29:05.661591    6042 iso.go:125] acquiring lock: {Name:mkd71eaf3e91c1dd737b75fca5ca69ff9bdad18f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 04:29:05.669761    6042 out.go:177] * Starting "kubenet-055000" primary control-plane node in "kubenet-055000" cluster
	I0722 04:29:05.673839    6042 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0722 04:29:05.673856    6042 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0722 04:29:05.673868    6042 cache.go:56] Caching tarball of preloaded images
	I0722 04:29:05.673940    6042 preload.go:172] Found /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0722 04:29:05.673946    6042 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0722 04:29:05.674011    6042 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/kubenet-055000/config.json ...
	I0722 04:29:05.674024    6042 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/kubenet-055000/config.json: {Name:mk7f9d8031c05e00c794439b621964351f79db03 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 04:29:05.674248    6042 start.go:360] acquireMachinesLock for kubenet-055000: {Name:mkd413881e612ea8d9ddb0175c22cca270cd2452 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0722 04:29:05.674283    6042 start.go:364] duration metric: took 29.084µs to acquireMachinesLock for "kubenet-055000"
	I0722 04:29:05.674294    6042 start.go:93] Provisioning new machine with config: &{Name:kubenet-055000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.30.3 ClusterName:kubenet-055000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0722 04:29:05.674333    6042 start.go:125] createHost starting for "" (driver="qemu2")
	I0722 04:29:05.681777    6042 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0722 04:29:05.698978    6042 start.go:159] libmachine.API.Create for "kubenet-055000" (driver="qemu2")
	I0722 04:29:05.699009    6042 client.go:168] LocalClient.Create starting
	I0722 04:29:05.699074    6042 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19313-1127/.minikube/certs/ca.pem
	I0722 04:29:05.699102    6042 main.go:141] libmachine: Decoding PEM data...
	I0722 04:29:05.699110    6042 main.go:141] libmachine: Parsing certificate...
	I0722 04:29:05.699149    6042 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19313-1127/.minikube/certs/cert.pem
	I0722 04:29:05.699171    6042 main.go:141] libmachine: Decoding PEM data...
	I0722 04:29:05.699182    6042 main.go:141] libmachine: Parsing certificate...
	I0722 04:29:05.699567    6042 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19313-1127/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0722 04:29:05.840943    6042 main.go:141] libmachine: Creating SSH key...
	I0722 04:29:05.991835    6042 main.go:141] libmachine: Creating Disk image...
	I0722 04:29:05.991844    6042 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0722 04:29:05.992027    6042 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/kubenet-055000/disk.qcow2.raw /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/kubenet-055000/disk.qcow2
	I0722 04:29:06.001938    6042 main.go:141] libmachine: STDOUT: 
	I0722 04:29:06.001959    6042 main.go:141] libmachine: STDERR: 
	I0722 04:29:06.002029    6042 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/kubenet-055000/disk.qcow2 +20000M
	I0722 04:29:06.017095    6042 main.go:141] libmachine: STDOUT: Image resized.
	
	I0722 04:29:06.017110    6042 main.go:141] libmachine: STDERR: 
	I0722 04:29:06.017122    6042 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/kubenet-055000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/kubenet-055000/disk.qcow2
	I0722 04:29:06.017126    6042 main.go:141] libmachine: Starting QEMU VM...
	I0722 04:29:06.017142    6042 qemu.go:418] Using hvf for hardware acceleration
	I0722 04:29:06.017176    6042 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/kubenet-055000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19313-1127/.minikube/machines/kubenet-055000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/kubenet-055000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f2:53:14:4c:01:72 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/kubenet-055000/disk.qcow2
	I0722 04:29:06.018992    6042 main.go:141] libmachine: STDOUT: 
	I0722 04:29:06.019007    6042 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0722 04:29:06.019023    6042 client.go:171] duration metric: took 320.015542ms to LocalClient.Create
	I0722 04:29:08.021183    6042 start.go:128] duration metric: took 2.346860791s to createHost
	I0722 04:29:08.021292    6042 start.go:83] releasing machines lock for "kubenet-055000", held for 2.347037917s
	W0722 04:29:08.021356    6042 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0722 04:29:08.029782    6042 out.go:177] * Deleting "kubenet-055000" in qemu2 ...
	W0722 04:29:08.051041    6042 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0722 04:29:08.051073    6042 start.go:729] Will try again in 5 seconds ...
	I0722 04:29:13.053263    6042 start.go:360] acquireMachinesLock for kubenet-055000: {Name:mkd413881e612ea8d9ddb0175c22cca270cd2452 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0722 04:29:13.053808    6042 start.go:364] duration metric: took 385.625µs to acquireMachinesLock for "kubenet-055000"
	I0722 04:29:13.053945    6042 start.go:93] Provisioning new machine with config: &{Name:kubenet-055000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.30.3 ClusterName:kubenet-055000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0722 04:29:13.054171    6042 start.go:125] createHost starting for "" (driver="qemu2")
	I0722 04:29:13.062792    6042 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0722 04:29:13.102417    6042 start.go:159] libmachine.API.Create for "kubenet-055000" (driver="qemu2")
	I0722 04:29:13.102471    6042 client.go:168] LocalClient.Create starting
	I0722 04:29:13.102609    6042 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19313-1127/.minikube/certs/ca.pem
	I0722 04:29:13.102669    6042 main.go:141] libmachine: Decoding PEM data...
	I0722 04:29:13.102686    6042 main.go:141] libmachine: Parsing certificate...
	I0722 04:29:13.102741    6042 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19313-1127/.minikube/certs/cert.pem
	I0722 04:29:13.102782    6042 main.go:141] libmachine: Decoding PEM data...
	I0722 04:29:13.102792    6042 main.go:141] libmachine: Parsing certificate...
	I0722 04:29:13.103225    6042 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19313-1127/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0722 04:29:13.248503    6042 main.go:141] libmachine: Creating SSH key...
	I0722 04:29:13.356202    6042 main.go:141] libmachine: Creating Disk image...
	I0722 04:29:13.356209    6042 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0722 04:29:13.356429    6042 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/kubenet-055000/disk.qcow2.raw /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/kubenet-055000/disk.qcow2
	I0722 04:29:13.365475    6042 main.go:141] libmachine: STDOUT: 
	I0722 04:29:13.365495    6042 main.go:141] libmachine: STDERR: 
	I0722 04:29:13.365562    6042 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/kubenet-055000/disk.qcow2 +20000M
	I0722 04:29:13.373536    6042 main.go:141] libmachine: STDOUT: Image resized.
	
	I0722 04:29:13.373549    6042 main.go:141] libmachine: STDERR: 
	I0722 04:29:13.373566    6042 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/kubenet-055000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/kubenet-055000/disk.qcow2
	I0722 04:29:13.373570    6042 main.go:141] libmachine: Starting QEMU VM...
	I0722 04:29:13.373581    6042 qemu.go:418] Using hvf for hardware acceleration
	I0722 04:29:13.373605    6042 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/kubenet-055000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19313-1127/.minikube/machines/kubenet-055000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/kubenet-055000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:ff:07:f0:ab:74 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/kubenet-055000/disk.qcow2
	I0722 04:29:13.375276    6042 main.go:141] libmachine: STDOUT: 
	I0722 04:29:13.375293    6042 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0722 04:29:13.375305    6042 client.go:171] duration metric: took 272.833167ms to LocalClient.Create
	I0722 04:29:15.377615    6042 start.go:128] duration metric: took 2.323446625s to createHost
	I0722 04:29:15.377707    6042 start.go:83] releasing machines lock for "kubenet-055000", held for 2.323913875s
	W0722 04:29:15.378020    6042 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-055000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubenet-055000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0722 04:29:15.386576    6042 out.go:177] 
	W0722 04:29:15.392755    6042 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0722 04:29:15.392806    6042 out.go:239] * 
	* 
	W0722 04:29:15.395490    6042 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0722 04:29:15.403600    6042 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (9.86s)
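Editor's note: every failure in this group exits with status 80 for the same reason recorded above: the qemu2 driver cannot reach the socket_vmnet socket at /var/run/socket_vmnet ("Connection refused"), so VM creation is retried once and then aborted with GUEST_PROVISION. The sketch below is a hypothetical local pre-flight check and is not part of net_test.go or the minikube test suite; it simply dials the unix socket that the failing qemu-system-aarch64 invocations use, assuming the daemon is expected to listen at the SocketVMnetPath shown in the cluster config logged above.

	// check_socket_vmnet.go — hypothetical helper, not part of the test suite.
	// Dials the unix socket used by the qemu2 driver on this runner; a
	// "connection refused" here reproduces the error seen in the runs above.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		const sock = "/var/run/socket_vmnet" // SocketVMnetPath from the logged cluster config
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			fmt.Printf("socket_vmnet unreachable at %s: %v\n", sock, err)
			return
		}
		defer conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}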

TestStartStop/group/old-k8s-version/serial/FirstStart (9.83s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-765000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-765000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (9.776237s)

-- stdout --
	* [old-k8s-version-765000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19313
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19313-1127/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19313-1127/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "old-k8s-version-765000" primary control-plane node in "old-k8s-version-765000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-765000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0722 04:29:17.593436    6159 out.go:291] Setting OutFile to fd 1 ...
	I0722 04:29:17.593560    6159 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:29:17.593563    6159 out.go:304] Setting ErrFile to fd 2...
	I0722 04:29:17.593565    6159 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:29:17.593693    6159 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19313-1127/.minikube/bin
	I0722 04:29:17.594781    6159 out.go:298] Setting JSON to false
	I0722 04:29:17.611060    6159 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5326,"bootTime":1721642431,"procs":457,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0722 04:29:17.611132    6159 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0722 04:29:17.615843    6159 out.go:177] * [old-k8s-version-765000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0722 04:29:17.623897    6159 out.go:177]   - MINIKUBE_LOCATION=19313
	I0722 04:29:17.623947    6159 notify.go:220] Checking for updates...
	I0722 04:29:17.629871    6159 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19313-1127/kubeconfig
	I0722 04:29:17.632833    6159 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0722 04:29:17.635896    6159 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0722 04:29:17.638828    6159 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19313-1127/.minikube
	I0722 04:29:17.641908    6159 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0722 04:29:17.645202    6159 config.go:182] Loaded profile config "multinode-941000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0722 04:29:17.645271    6159 config.go:182] Loaded profile config "stopped-upgrade-239000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0722 04:29:17.645329    6159 driver.go:392] Setting default libvirt URI to qemu:///system
	I0722 04:29:17.648803    6159 out.go:177] * Using the qemu2 driver based on user configuration
	I0722 04:29:17.655735    6159 start.go:297] selected driver: qemu2
	I0722 04:29:17.655740    6159 start.go:901] validating driver "qemu2" against <nil>
	I0722 04:29:17.655746    6159 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0722 04:29:17.657913    6159 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0722 04:29:17.661849    6159 out.go:177] * Automatically selected the socket_vmnet network
	I0722 04:29:17.664892    6159 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0722 04:29:17.664909    6159 cni.go:84] Creating CNI manager for ""
	I0722 04:29:17.664916    6159 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0722 04:29:17.664954    6159 start.go:340] cluster config:
	{Name:old-k8s-version-765000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-765000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/
socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 04:29:17.668644    6159 iso.go:125] acquiring lock: {Name:mkd71eaf3e91c1dd737b75fca5ca69ff9bdad18f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 04:29:17.676878    6159 out.go:177] * Starting "old-k8s-version-765000" primary control-plane node in "old-k8s-version-765000" cluster
	I0722 04:29:17.680799    6159 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0722 04:29:17.680813    6159 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0722 04:29:17.680825    6159 cache.go:56] Caching tarball of preloaded images
	I0722 04:29:17.680880    6159 preload.go:172] Found /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0722 04:29:17.680886    6159 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0722 04:29:17.680945    6159 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/old-k8s-version-765000/config.json ...
	I0722 04:29:17.680958    6159 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/old-k8s-version-765000/config.json: {Name:mk7d87902a35306405c429c6c133c8e2610cb59a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 04:29:17.681169    6159 start.go:360] acquireMachinesLock for old-k8s-version-765000: {Name:mkd413881e612ea8d9ddb0175c22cca270cd2452 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0722 04:29:17.681207    6159 start.go:364] duration metric: took 28.041µs to acquireMachinesLock for "old-k8s-version-765000"
	I0722 04:29:17.681218    6159 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-765000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-765000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mount
Options:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0722 04:29:17.681250    6159 start.go:125] createHost starting for "" (driver="qemu2")
	I0722 04:29:17.687784    6159 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0722 04:29:17.705425    6159 start.go:159] libmachine.API.Create for "old-k8s-version-765000" (driver="qemu2")
	I0722 04:29:17.705461    6159 client.go:168] LocalClient.Create starting
	I0722 04:29:17.705530    6159 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19313-1127/.minikube/certs/ca.pem
	I0722 04:29:17.705565    6159 main.go:141] libmachine: Decoding PEM data...
	I0722 04:29:17.705575    6159 main.go:141] libmachine: Parsing certificate...
	I0722 04:29:17.705620    6159 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19313-1127/.minikube/certs/cert.pem
	I0722 04:29:17.705642    6159 main.go:141] libmachine: Decoding PEM data...
	I0722 04:29:17.705652    6159 main.go:141] libmachine: Parsing certificate...
	I0722 04:29:17.706013    6159 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19313-1127/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0722 04:29:17.847070    6159 main.go:141] libmachine: Creating SSH key...
	I0722 04:29:17.939728    6159 main.go:141] libmachine: Creating Disk image...
	I0722 04:29:17.939735    6159 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0722 04:29:17.939956    6159 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/old-k8s-version-765000/disk.qcow2.raw /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/old-k8s-version-765000/disk.qcow2
	I0722 04:29:17.949126    6159 main.go:141] libmachine: STDOUT: 
	I0722 04:29:17.949144    6159 main.go:141] libmachine: STDERR: 
	I0722 04:29:17.949197    6159 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/old-k8s-version-765000/disk.qcow2 +20000M
	I0722 04:29:17.957035    6159 main.go:141] libmachine: STDOUT: Image resized.
	
	I0722 04:29:17.957050    6159 main.go:141] libmachine: STDERR: 
	I0722 04:29:17.957063    6159 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/old-k8s-version-765000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/old-k8s-version-765000/disk.qcow2
	I0722 04:29:17.957067    6159 main.go:141] libmachine: Starting QEMU VM...
	I0722 04:29:17.957079    6159 qemu.go:418] Using hvf for hardware acceleration
	I0722 04:29:17.957107    6159 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/old-k8s-version-765000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19313-1127/.minikube/machines/old-k8s-version-765000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/old-k8s-version-765000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:25:0f:6f:2d:3e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/old-k8s-version-765000/disk.qcow2
	I0722 04:29:17.958697    6159 main.go:141] libmachine: STDOUT: 
	I0722 04:29:17.958713    6159 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0722 04:29:17.958731    6159 client.go:171] duration metric: took 253.270625ms to LocalClient.Create
	I0722 04:29:19.960980    6159 start.go:128] duration metric: took 2.279738959s to createHost
	I0722 04:29:19.961061    6159 start.go:83] releasing machines lock for "old-k8s-version-765000", held for 2.279881708s
	W0722 04:29:19.961142    6159 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0722 04:29:19.971529    6159 out.go:177] * Deleting "old-k8s-version-765000" in qemu2 ...
	W0722 04:29:19.997534    6159 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0722 04:29:19.997567    6159 start.go:729] Will try again in 5 seconds ...
	I0722 04:29:24.999707    6159 start.go:360] acquireMachinesLock for old-k8s-version-765000: {Name:mkd413881e612ea8d9ddb0175c22cca270cd2452 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0722 04:29:25.000143    6159 start.go:364] duration metric: took 323.334µs to acquireMachinesLock for "old-k8s-version-765000"
	I0722 04:29:25.000250    6159 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-765000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-765000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mount
Options:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0722 04:29:25.000551    6159 start.go:125] createHost starting for "" (driver="qemu2")
	I0722 04:29:25.009963    6159 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0722 04:29:25.042642    6159 start.go:159] libmachine.API.Create for "old-k8s-version-765000" (driver="qemu2")
	I0722 04:29:25.042686    6159 client.go:168] LocalClient.Create starting
	I0722 04:29:25.042784    6159 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19313-1127/.minikube/certs/ca.pem
	I0722 04:29:25.042841    6159 main.go:141] libmachine: Decoding PEM data...
	I0722 04:29:25.042860    6159 main.go:141] libmachine: Parsing certificate...
	I0722 04:29:25.042918    6159 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19313-1127/.minikube/certs/cert.pem
	I0722 04:29:25.042961    6159 main.go:141] libmachine: Decoding PEM data...
	I0722 04:29:25.042975    6159 main.go:141] libmachine: Parsing certificate...
	I0722 04:29:25.043369    6159 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19313-1127/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0722 04:29:25.187919    6159 main.go:141] libmachine: Creating SSH key...
	I0722 04:29:25.276927    6159 main.go:141] libmachine: Creating Disk image...
	I0722 04:29:25.276942    6159 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0722 04:29:25.277546    6159 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/old-k8s-version-765000/disk.qcow2.raw /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/old-k8s-version-765000/disk.qcow2
	I0722 04:29:25.286756    6159 main.go:141] libmachine: STDOUT: 
	I0722 04:29:25.286774    6159 main.go:141] libmachine: STDERR: 
	I0722 04:29:25.286828    6159 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/old-k8s-version-765000/disk.qcow2 +20000M
	I0722 04:29:25.294848    6159 main.go:141] libmachine: STDOUT: Image resized.
	
	I0722 04:29:25.294863    6159 main.go:141] libmachine: STDERR: 
	I0722 04:29:25.294877    6159 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/old-k8s-version-765000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/old-k8s-version-765000/disk.qcow2
	I0722 04:29:25.294882    6159 main.go:141] libmachine: Starting QEMU VM...
	I0722 04:29:25.294891    6159 qemu.go:418] Using hvf for hardware acceleration
	I0722 04:29:25.294916    6159 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/old-k8s-version-765000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19313-1127/.minikube/machines/old-k8s-version-765000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/old-k8s-version-765000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:d1:c1:44:a3:a7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/old-k8s-version-765000/disk.qcow2
	I0722 04:29:25.296628    6159 main.go:141] libmachine: STDOUT: 
	I0722 04:29:25.296643    6159 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0722 04:29:25.296654    6159 client.go:171] duration metric: took 253.966625ms to LocalClient.Create
	I0722 04:29:27.298836    6159 start.go:128] duration metric: took 2.298293291s to createHost
	I0722 04:29:27.298909    6159 start.go:83] releasing machines lock for "old-k8s-version-765000", held for 2.298784459s
	W0722 04:29:27.299324    6159 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-765000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-765000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0722 04:29:27.307913    6159 out.go:177] 
	W0722 04:29:27.314064    6159 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0722 04:29:27.314092    6159 out.go:239] * 
	* 
	W0722 04:29:27.316922    6159 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0722 04:29:27.325932    6159 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-765000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-765000 -n old-k8s-version-765000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-765000 -n old-k8s-version-765000: exit status 7 (56.469375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-765000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (9.83s)
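Every start attempt above dies at the same point: the qemu2 driver cannot reach the socket_vmnet unix socket, so the VM is never created. A minimal check on the build host, assuming socket_vmnet is installed at the paths shown in the log, is:

	ls -l /var/run/socket_vmnet     # the unix socket that /opt/socket_vmnet/bin/socket_vmnet_client dials
	nc -U /var/run/socket_vmnet     # "Connection refused" here reproduces the failure above (Ctrl-C if it connects)

If socket_vmnet was installed via Homebrew (an assumption, not something the log records), restarting its root daemon, for example with "sudo brew services restart socket_vmnet", is one way to bring the socket back before re-running this group.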

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-765000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-765000 create -f testdata/busybox.yaml: exit status 1 (29.282ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-765000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-765000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-765000 -n old-k8s-version-765000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-765000 -n old-k8s-version-765000: exit status 7 (29.270084ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-765000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-765000 -n old-k8s-version-765000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-765000 -n old-k8s-version-765000: exit status 7 (28.682667ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-765000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)
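The DeployApp failure is a downstream effect of the failed start: the cluster never came up, so no kubeconfig context was written and every kubectl call against it exits 1. A quick way to confirm that, assuming the same kubectl the test uses, is:

	kubectl config get-contexts old-k8s-version-765000   # errors when the context was never created
	kubectl config get-contexts                           # lists the contexts that do exist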

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-765000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-765000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-765000 describe deploy/metrics-server -n kube-system: exit status 1 (27.250792ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-765000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-765000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-765000 -n old-k8s-version-765000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-765000 -n old-k8s-version-765000: exit status 7 (29.331667ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-765000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (5.25s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-765000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-765000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (5.186008833s)

                                                
                                                
-- stdout --
	* [old-k8s-version-765000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19313
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19313-1127/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19313-1127/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	* Using the qemu2 driver based on existing profile
	* Starting "old-k8s-version-765000" primary control-plane node in "old-k8s-version-765000" cluster
	* Restarting existing qemu2 VM for "old-k8s-version-765000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-765000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0722 04:29:31.191525    6219 out.go:291] Setting OutFile to fd 1 ...
	I0722 04:29:31.191655    6219 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:29:31.191658    6219 out.go:304] Setting ErrFile to fd 2...
	I0722 04:29:31.191661    6219 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:29:31.191796    6219 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19313-1127/.minikube/bin
	I0722 04:29:31.192867    6219 out.go:298] Setting JSON to false
	I0722 04:29:31.209271    6219 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5340,"bootTime":1721642431,"procs":457,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0722 04:29:31.209346    6219 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0722 04:29:31.213202    6219 out.go:177] * [old-k8s-version-765000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0722 04:29:31.220156    6219 out.go:177]   - MINIKUBE_LOCATION=19313
	I0722 04:29:31.220218    6219 notify.go:220] Checking for updates...
	I0722 04:29:31.226093    6219 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19313-1127/kubeconfig
	I0722 04:29:31.229081    6219 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0722 04:29:31.232087    6219 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0722 04:29:31.235114    6219 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19313-1127/.minikube
	I0722 04:29:31.238052    6219 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0722 04:29:31.241295    6219 config.go:182] Loaded profile config "old-k8s-version-765000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0722 04:29:31.244043    6219 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0722 04:29:31.247032    6219 driver.go:392] Setting default libvirt URI to qemu:///system
	I0722 04:29:31.251086    6219 out.go:177] * Using the qemu2 driver based on existing profile
	I0722 04:29:31.257014    6219 start.go:297] selected driver: qemu2
	I0722 04:29:31.257021    6219 start.go:901] validating driver "qemu2" against &{Name:old-k8s-version-765000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:
{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-765000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:
0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 04:29:31.257096    6219 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0722 04:29:31.259301    6219 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0722 04:29:31.259320    6219 cni.go:84] Creating CNI manager for ""
	I0722 04:29:31.259326    6219 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0722 04:29:31.259350    6219 start.go:340] cluster config:
	{Name:old-k8s-version-765000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-765000 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount
9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 04:29:31.262660    6219 iso.go:125] acquiring lock: {Name:mkd71eaf3e91c1dd737b75fca5ca69ff9bdad18f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 04:29:31.271057    6219 out.go:177] * Starting "old-k8s-version-765000" primary control-plane node in "old-k8s-version-765000" cluster
	I0722 04:29:31.275044    6219 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0722 04:29:31.275055    6219 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0722 04:29:31.275064    6219 cache.go:56] Caching tarball of preloaded images
	I0722 04:29:31.275113    6219 preload.go:172] Found /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0722 04:29:31.275117    6219 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0722 04:29:31.275169    6219 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/old-k8s-version-765000/config.json ...
	I0722 04:29:31.275564    6219 start.go:360] acquireMachinesLock for old-k8s-version-765000: {Name:mkd413881e612ea8d9ddb0175c22cca270cd2452 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0722 04:29:31.275599    6219 start.go:364] duration metric: took 29.584µs to acquireMachinesLock for "old-k8s-version-765000"
	I0722 04:29:31.275607    6219 start.go:96] Skipping create...Using existing machine configuration
	I0722 04:29:31.275612    6219 fix.go:54] fixHost starting: 
	I0722 04:29:31.275720    6219 fix.go:112] recreateIfNeeded on old-k8s-version-765000: state=Stopped err=<nil>
	W0722 04:29:31.275728    6219 fix.go:138] unexpected machine state, will restart: <nil>
	I0722 04:29:31.279024    6219 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-765000" ...
	I0722 04:29:31.286101    6219 qemu.go:418] Using hvf for hardware acceleration
	I0722 04:29:31.286143    6219 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/old-k8s-version-765000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19313-1127/.minikube/machines/old-k8s-version-765000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/old-k8s-version-765000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:d1:c1:44:a3:a7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/old-k8s-version-765000/disk.qcow2
	I0722 04:29:31.287918    6219 main.go:141] libmachine: STDOUT: 
	I0722 04:29:31.287934    6219 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0722 04:29:31.287960    6219 fix.go:56] duration metric: took 12.348125ms for fixHost
	I0722 04:29:31.287964    6219 start.go:83] releasing machines lock for "old-k8s-version-765000", held for 12.36175ms
	W0722 04:29:31.287970    6219 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0722 04:29:31.287999    6219 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0722 04:29:31.288003    6219 start.go:729] Will try again in 5 seconds ...
	I0722 04:29:36.290223    6219 start.go:360] acquireMachinesLock for old-k8s-version-765000: {Name:mkd413881e612ea8d9ddb0175c22cca270cd2452 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0722 04:29:36.290844    6219 start.go:364] duration metric: took 448.625µs to acquireMachinesLock for "old-k8s-version-765000"
	I0722 04:29:36.291019    6219 start.go:96] Skipping create...Using existing machine configuration
	I0722 04:29:36.291041    6219 fix.go:54] fixHost starting: 
	I0722 04:29:36.291772    6219 fix.go:112] recreateIfNeeded on old-k8s-version-765000: state=Stopped err=<nil>
	W0722 04:29:36.291802    6219 fix.go:138] unexpected machine state, will restart: <nil>
	I0722 04:29:36.300368    6219 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-765000" ...
	I0722 04:29:36.303344    6219 qemu.go:418] Using hvf for hardware acceleration
	I0722 04:29:36.303510    6219 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/old-k8s-version-765000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19313-1127/.minikube/machines/old-k8s-version-765000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/old-k8s-version-765000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:d1:c1:44:a3:a7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/old-k8s-version-765000/disk.qcow2
	I0722 04:29:36.313610    6219 main.go:141] libmachine: STDOUT: 
	I0722 04:29:36.313670    6219 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0722 04:29:36.313743    6219 fix.go:56] duration metric: took 22.703417ms for fixHost
	I0722 04:29:36.313758    6219 start.go:83] releasing machines lock for "old-k8s-version-765000", held for 22.885792ms
	W0722 04:29:36.313920    6219 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-765000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-765000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0722 04:29:36.322393    6219 out.go:177] 
	W0722 04:29:36.326469    6219 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0722 04:29:36.326496    6219 out.go:239] * 
	* 
	W0722 04:29:36.329119    6219 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0722 04:29:36.336375    6219 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-765000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-765000 -n old-k8s-version-765000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-765000 -n old-k8s-version-765000: exit status 7 (64.104334ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-765000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (5.25s)
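Both the first and the second start fail before a working VM exists, so the restart path only retries the same socket_vmnet connection and hits the same refusal. The recovery the log itself suggests, sketched here under the assumption that socket_vmnet has been fixed first, is to delete the stale profile and start again with the flags from the failing command:

	out/minikube-darwin-arm64 delete -p old-k8s-version-765000
	out/minikube-darwin-arm64 start -p old-k8s-version-765000 --memory=2200 --driver=qemu2 --kubernetes-version=v1.20.0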

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-765000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-765000 -n old-k8s-version-765000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-765000 -n old-k8s-version-765000: exit status 7 (30.93975ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-765000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-765000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-765000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-765000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (25.9755ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-765000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-765000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-765000 -n old-k8s-version-765000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-765000 -n old-k8s-version-765000: exit status 7 (28.783375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-765000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p old-k8s-version-765000 image list --format=json
start_stop_delete_test.go:304: v1.20.0 images missing (-want +got):
  []string{
- 	"k8s.gcr.io/coredns:1.7.0",
- 	"k8s.gcr.io/etcd:3.4.13-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.20.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.20.0",
- 	"k8s.gcr.io/kube-proxy:v1.20.0",
- 	"k8s.gcr.io/kube-scheduler:v1.20.0",
- 	"k8s.gcr.io/pause:3.2",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-765000 -n old-k8s-version-765000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-765000 -n old-k8s-version-765000: exit status 7 (28.441375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-765000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)
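The -want list above is the full image set the test expects for v1.20.0; because the VM never started there is nothing on the +got side. On a profile that did start, the same command the test runs would be expected to list those k8s.gcr.io images:

	out/minikube-darwin-arm64 -p old-k8s-version-765000 image list --format=json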

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-765000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-765000 --alsologtostderr -v=1: exit status 83 (48.095167ms)

                                                
                                                
-- stdout --
	* The control-plane node old-k8s-version-765000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p old-k8s-version-765000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0722 04:29:36.597368    6240 out.go:291] Setting OutFile to fd 1 ...
	I0722 04:29:36.598345    6240 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:29:36.598350    6240 out.go:304] Setting ErrFile to fd 2...
	I0722 04:29:36.598353    6240 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:29:36.598507    6240 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19313-1127/.minikube/bin
	I0722 04:29:36.598712    6240 out.go:298] Setting JSON to false
	I0722 04:29:36.598720    6240 mustload.go:65] Loading cluster: old-k8s-version-765000
	I0722 04:29:36.598915    6240 config.go:182] Loaded profile config "old-k8s-version-765000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0722 04:29:36.610430    6240 out.go:177] * The control-plane node old-k8s-version-765000 host is not running: state=Stopped
	I0722 04:29:36.613735    6240 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-765000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-765000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-765000 -n old-k8s-version-765000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-765000 -n old-k8s-version-765000: exit status 7 (28.232333ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-765000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-765000 -n old-k8s-version-765000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-765000 -n old-k8s-version-765000: exit status 7 (28.071417ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-765000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.10s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (9.93s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-239000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-239000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-beta.0: exit status 80 (9.859132125s)

                                                
                                                
-- stdout --
	* [no-preload-239000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19313
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19313-1127/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19313-1127/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "no-preload-239000" primary control-plane node in "no-preload-239000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-239000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0722 04:29:36.916448    6257 out.go:291] Setting OutFile to fd 1 ...
	I0722 04:29:36.916582    6257 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:29:36.916586    6257 out.go:304] Setting ErrFile to fd 2...
	I0722 04:29:36.916588    6257 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:29:36.916725    6257 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19313-1127/.minikube/bin
	I0722 04:29:36.917949    6257 out.go:298] Setting JSON to false
	I0722 04:29:36.934861    6257 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5345,"bootTime":1721642431,"procs":455,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0722 04:29:36.934930    6257 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0722 04:29:36.940204    6257 out.go:177] * [no-preload-239000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0722 04:29:36.947040    6257 out.go:177]   - MINIKUBE_LOCATION=19313
	I0722 04:29:36.947110    6257 notify.go:220] Checking for updates...
	I0722 04:29:36.955178    6257 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19313-1127/kubeconfig
	I0722 04:29:36.958130    6257 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0722 04:29:36.961172    6257 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0722 04:29:36.964163    6257 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19313-1127/.minikube
	I0722 04:29:36.965500    6257 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0722 04:29:36.968423    6257 config.go:182] Loaded profile config "multinode-941000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0722 04:29:36.968482    6257 config.go:182] Loaded profile config "stopped-upgrade-239000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0722 04:29:36.968536    6257 driver.go:392] Setting default libvirt URI to qemu:///system
	I0722 04:29:36.973118    6257 out.go:177] * Using the qemu2 driver based on user configuration
	I0722 04:29:36.978151    6257 start.go:297] selected driver: qemu2
	I0722 04:29:36.978158    6257 start.go:901] validating driver "qemu2" against <nil>
	I0722 04:29:36.978165    6257 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0722 04:29:36.980536    6257 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0722 04:29:36.984185    6257 out.go:177] * Automatically selected the socket_vmnet network
	I0722 04:29:36.988119    6257 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0722 04:29:36.988137    6257 cni.go:84] Creating CNI manager for ""
	I0722 04:29:36.988143    6257 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0722 04:29:36.988147    6257 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0722 04:29:36.988178    6257 start.go:340] cluster config:
	{Name:no-preload-239000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-239000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vm
net/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 04:29:36.992119    6257 iso.go:125] acquiring lock: {Name:mkd71eaf3e91c1dd737b75fca5ca69ff9bdad18f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 04:29:36.999188    6257 out.go:177] * Starting "no-preload-239000" primary control-plane node in "no-preload-239000" cluster
	I0722 04:29:37.003119    6257 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0722 04:29:37.003210    6257 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/no-preload-239000/config.json ...
	I0722 04:29:37.003237    6257 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/no-preload-239000/config.json: {Name:mk859afc8eda0df9c535bdbca6f9bc00806cd54f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 04:29:37.003247    6257 cache.go:107] acquiring lock: {Name:mk0a4a038b81605f387adfa4e74fec8a71c61136 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 04:29:37.003239    6257 cache.go:107] acquiring lock: {Name:mk09911afc62c2f34d1e1b4af7a153172dc11435 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 04:29:37.003283    6257 cache.go:107] acquiring lock: {Name:mkdc43c2ce4b3617a2322e1f2e26a549e845e50c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 04:29:37.003306    6257 cache.go:115] /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0722 04:29:37.003315    6257 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19313-1127/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 69.75µs
	I0722 04:29:37.003325    6257 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0722 04:29:37.003339    6257 cache.go:107] acquiring lock: {Name:mkc7fafac2da8e8e4632674ce2fa91f5eb97c105 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 04:29:37.003437    6257 cache.go:107] acquiring lock: {Name:mk1ed40748bd4c600eced4790013fe081c2a5b4f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 04:29:37.003419    6257 cache.go:107] acquiring lock: {Name:mk0d52026fd010672f19837dd13e299656667427 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 04:29:37.003499    6257 cache.go:107] acquiring lock: {Name:mkfd29141897d0020349bdcd69bcc5686ae71600 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 04:29:37.003474    6257 cache.go:107] acquiring lock: {Name:mkce32af3aad2c8498cd6078177735187dfef164 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 04:29:37.003612    6257 start.go:360] acquireMachinesLock for no-preload-239000: {Name:mkd413881e612ea8d9ddb0175c22cca270cd2452 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0722 04:29:37.003621    6257 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.14-0
	I0722 04:29:37.003646    6257 start.go:364] duration metric: took 28µs to acquireMachinesLock for "no-preload-239000"
	I0722 04:29:37.003657    6257 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0722 04:29:37.003657    6257 start.go:93] Provisioning new machine with config: &{Name:no-preload-239000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-239000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:2621
44 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0722 04:29:37.003694    6257 image.go:134] retrieving image: registry.k8s.io/pause:3.10
	I0722 04:29:37.003655    6257 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0722 04:29:37.003717    6257 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0722 04:29:37.003695    6257 start.go:125] createHost starting for "" (driver="qemu2")
	I0722 04:29:37.003747    6257 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0722 04:29:37.003842    6257 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0722 04:29:37.011087    6257 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0722 04:29:37.014855    6257 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0722 04:29:37.014863    6257 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	I0722 04:29:37.014982    6257 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	I0722 04:29:37.015089    6257 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0-beta.0
	I0722 04:29:37.015325    6257 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	I0722 04:29:37.015698    6257 image.go:177] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0722 04:29:37.015729    6257 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.14-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.14-0
	I0722 04:29:37.027994    6257 start.go:159] libmachine.API.Create for "no-preload-239000" (driver="qemu2")
	I0722 04:29:37.028026    6257 client.go:168] LocalClient.Create starting
	I0722 04:29:37.028160    6257 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19313-1127/.minikube/certs/ca.pem
	I0722 04:29:37.028199    6257 main.go:141] libmachine: Decoding PEM data...
	I0722 04:29:37.028208    6257 main.go:141] libmachine: Parsing certificate...
	I0722 04:29:37.028258    6257 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19313-1127/.minikube/certs/cert.pem
	I0722 04:29:37.028283    6257 main.go:141] libmachine: Decoding PEM data...
	I0722 04:29:37.028290    6257 main.go:141] libmachine: Parsing certificate...
	I0722 04:29:37.028654    6257 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19313-1127/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0722 04:29:37.174579    6257 main.go:141] libmachine: Creating SSH key...
	I0722 04:29:37.316037    6257 main.go:141] libmachine: Creating Disk image...
	I0722 04:29:37.316056    6257 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0722 04:29:37.316290    6257 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/no-preload-239000/disk.qcow2.raw /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/no-preload-239000/disk.qcow2
	I0722 04:29:37.325650    6257 main.go:141] libmachine: STDOUT: 
	I0722 04:29:37.325666    6257 main.go:141] libmachine: STDERR: 
	I0722 04:29:37.325720    6257 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/no-preload-239000/disk.qcow2 +20000M
	I0722 04:29:37.333886    6257 main.go:141] libmachine: STDOUT: Image resized.
	
	I0722 04:29:37.333901    6257 main.go:141] libmachine: STDERR: 
	I0722 04:29:37.333911    6257 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/no-preload-239000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/no-preload-239000/disk.qcow2
	I0722 04:29:37.333917    6257 main.go:141] libmachine: Starting QEMU VM...
	I0722 04:29:37.333934    6257 qemu.go:418] Using hvf for hardware acceleration
	I0722 04:29:37.333976    6257 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/no-preload-239000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19313-1127/.minikube/machines/no-preload-239000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/no-preload-239000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:2c:b1:bd:5b:5b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/no-preload-239000/disk.qcow2
	I0722 04:29:37.335696    6257 main.go:141] libmachine: STDOUT: 
	I0722 04:29:37.335711    6257 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0722 04:29:37.335729    6257 client.go:171] duration metric: took 307.7045ms to LocalClient.Create
	I0722 04:29:39.157271    6257 cache.go:162] opening:  /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0
	I0722 04:29:39.329847    6257 cache.go:162] opening:  /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1
	I0722 04:29:39.332026    6257 cache.go:162] opening:  /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10
	I0722 04:29:39.335092    6257 cache.go:162] opening:  /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0
	I0722 04:29:39.336336    6257 start.go:128] duration metric: took 2.332633416s to createHost
	I0722 04:29:39.336359    6257 start.go:83] releasing machines lock for "no-preload-239000", held for 2.332744709s
	W0722 04:29:39.336423    6257 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0722 04:29:39.346214    6257 out.go:177] * Deleting "no-preload-239000" in qemu2 ...
	W0722 04:29:39.375497    6257 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0722 04:29:39.375529    6257 start.go:729] Will try again in 5 seconds ...
	I0722 04:29:39.480598    6257 cache.go:157] /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I0722 04:29:39.480617    6257 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19313-1127/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 2.477288209s
	I0722 04:29:39.480626    6257 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I0722 04:29:39.906538    6257 cache.go:162] opening:  /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0
	I0722 04:29:39.915790    6257 cache.go:162] opening:  /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0
	I0722 04:29:39.916647    6257 cache.go:162] opening:  /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0
	I0722 04:29:42.285609    6257 cache.go:157] /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 exists
	I0722 04:29:42.285682    6257 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19313-1127/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0" took 5.282507792s
	I0722 04:29:42.285712    6257 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 succeeded
	I0722 04:29:42.364068    6257 cache.go:157] /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0722 04:29:42.364124    6257 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/19313-1127/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 5.3607615s
	I0722 04:29:42.364158    6257 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0722 04:29:42.407857    6257 cache.go:157] /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 exists
	I0722 04:29:42.407900    6257 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19313-1127/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0" took 5.40458325s
	I0722 04:29:42.407921    6257 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 succeeded
	I0722 04:29:43.118876    6257 cache.go:157] /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 exists
	I0722 04:29:43.118926    6257 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19313-1127/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0" took 6.115592959s
	I0722 04:29:43.118950    6257 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 succeeded
	I0722 04:29:43.161310    6257 cache.go:157] /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 exists
	I0722 04:29:43.161352    6257 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19313-1127/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0" took 6.1582315s
	I0722 04:29:43.161374    6257 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 succeeded
	I0722 04:29:44.375988    6257 start.go:360] acquireMachinesLock for no-preload-239000: {Name:mkd413881e612ea8d9ddb0175c22cca270cd2452 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0722 04:29:44.376454    6257 start.go:364] duration metric: took 388.458µs to acquireMachinesLock for "no-preload-239000"
	I0722 04:29:44.376592    6257 start.go:93] Provisioning new machine with config: &{Name:no-preload-239000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-239000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:2621
44 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0722 04:29:44.376816    6257 start.go:125] createHost starting for "" (driver="qemu2")
	I0722 04:29:44.383530    6257 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0722 04:29:44.432852    6257 start.go:159] libmachine.API.Create for "no-preload-239000" (driver="qemu2")
	I0722 04:29:44.432912    6257 client.go:168] LocalClient.Create starting
	I0722 04:29:44.433033    6257 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19313-1127/.minikube/certs/ca.pem
	I0722 04:29:44.433099    6257 main.go:141] libmachine: Decoding PEM data...
	I0722 04:29:44.433123    6257 main.go:141] libmachine: Parsing certificate...
	I0722 04:29:44.433198    6257 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19313-1127/.minikube/certs/cert.pem
	I0722 04:29:44.433242    6257 main.go:141] libmachine: Decoding PEM data...
	I0722 04:29:44.433255    6257 main.go:141] libmachine: Parsing certificate...
	I0722 04:29:44.433768    6257 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19313-1127/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0722 04:29:44.589788    6257 main.go:141] libmachine: Creating SSH key...
	I0722 04:29:44.688126    6257 main.go:141] libmachine: Creating Disk image...
	I0722 04:29:44.688132    6257 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0722 04:29:44.688332    6257 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/no-preload-239000/disk.qcow2.raw /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/no-preload-239000/disk.qcow2
	I0722 04:29:44.697823    6257 main.go:141] libmachine: STDOUT: 
	I0722 04:29:44.697857    6257 main.go:141] libmachine: STDERR: 
	I0722 04:29:44.697917    6257 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/no-preload-239000/disk.qcow2 +20000M
	I0722 04:29:44.706089    6257 main.go:141] libmachine: STDOUT: Image resized.
	
	I0722 04:29:44.706107    6257 main.go:141] libmachine: STDERR: 
	I0722 04:29:44.706126    6257 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/no-preload-239000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/no-preload-239000/disk.qcow2
	I0722 04:29:44.706129    6257 main.go:141] libmachine: Starting QEMU VM...
	I0722 04:29:44.706138    6257 qemu.go:418] Using hvf for hardware acceleration
	I0722 04:29:44.706192    6257 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/no-preload-239000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19313-1127/.minikube/machines/no-preload-239000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/no-preload-239000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:0a:6c:dd:d8:3a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/no-preload-239000/disk.qcow2
	I0722 04:29:44.707918    6257 main.go:141] libmachine: STDOUT: 
	I0722 04:29:44.707935    6257 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0722 04:29:44.707947    6257 client.go:171] duration metric: took 275.034208ms to LocalClient.Create
	I0722 04:29:46.494578    6257 cache.go:157] /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0 exists
	I0722 04:29:46.494639    6257 cache.go:96] cache image "registry.k8s.io/etcd:3.5.14-0" -> "/Users/jenkins/minikube-integration/19313-1127/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0" took 9.491452s
	I0722 04:29:46.494683    6257 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.14-0 -> /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0 succeeded
	I0722 04:29:46.494739    6257 cache.go:87] Successfully saved all images to host disk.
	I0722 04:29:46.710131    6257 start.go:128] duration metric: took 2.333326125s to createHost
	I0722 04:29:46.710165    6257 start.go:83] releasing machines lock for "no-preload-239000", held for 2.333724709s
	W0722 04:29:46.710448    6257 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-239000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-239000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0722 04:29:46.718494    6257 out.go:177] 
	W0722 04:29:46.721542    6257 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0722 04:29:46.721576    6257 out.go:239] * 
	* 
	W0722 04:29:46.724390    6257 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0722 04:29:46.734383    6257 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-239000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-beta.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-239000 -n no-preload-239000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-239000 -n no-preload-239000: exit status 7 (63.379ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-239000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (9.93s)
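Every FirstStart failure in this group follows the same pattern: /opt/socket_vmnet/bin/socket_vmnet_client cannot reach the Unix socket at /var/run/socket_vmnet ("Connection refused"), so the qemu2 driver never gets a network backend and the start exits with status 80. The sketch below reproduces only that connectivity check, assuming Go is available on the test host; the file name probe_socket_vmnet.go and the standalone program are illustrative, not part of minikube or this test suite.

    // probe_socket_vmnet.go - minimal sketch: dial the Unix socket that the
    // qemu2 driver hands to socket_vmnet_client, to confirm whether the
    // socket_vmnet daemon is actually listening on the agent.
    package main

    import (
        "fmt"
        "net"
        "os"
        "time"
    )

    func main() {
        const sock = "/var/run/socket_vmnet" // path taken from the failing logs above
        conn, err := net.DialTimeout("unix", sock, 2*time.Second)
        if err != nil {
            // Mirrors the report: Failed to connect to "/var/run/socket_vmnet": Connection refused
            fmt.Fprintf(os.Stderr, "socket_vmnet not reachable at %s: %v\n", sock, err)
            os.Exit(1)
        }
        conn.Close()
        fmt.Printf("socket_vmnet is listening at %s\n", sock)
    }

If this dial fails with "connection refused" on the agent, the socket_vmnet daemon is simply not running there, which would account for every exit status 80 in this group rather than anything specific to --preload=false or --embed-certs.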

TestStartStop/group/embed-certs/serial/FirstStart (10.4s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-660000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-660000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3: exit status 80 (10.363600542s)

-- stdout --
	* [embed-certs-660000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19313
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19313-1127/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19313-1127/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "embed-certs-660000" primary control-plane node in "embed-certs-660000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-660000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0722 04:29:38.829112    6298 out.go:291] Setting OutFile to fd 1 ...
	I0722 04:29:38.829245    6298 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:29:38.829248    6298 out.go:304] Setting ErrFile to fd 2...
	I0722 04:29:38.829254    6298 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:29:38.829407    6298 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19313-1127/.minikube/bin
	I0722 04:29:38.830524    6298 out.go:298] Setting JSON to false
	I0722 04:29:38.846515    6298 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5347,"bootTime":1721642431,"procs":454,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0722 04:29:38.846584    6298 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0722 04:29:38.850617    6298 out.go:177] * [embed-certs-660000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0722 04:29:38.858820    6298 out.go:177]   - MINIKUBE_LOCATION=19313
	I0722 04:29:38.858872    6298 notify.go:220] Checking for updates...
	I0722 04:29:38.865679    6298 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19313-1127/kubeconfig
	I0722 04:29:38.868740    6298 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0722 04:29:38.872632    6298 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0722 04:29:38.875689    6298 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19313-1127/.minikube
	I0722 04:29:38.878713    6298 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0722 04:29:38.881989    6298 config.go:182] Loaded profile config "multinode-941000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0722 04:29:38.882063    6298 config.go:182] Loaded profile config "no-preload-239000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0-beta.0
	I0722 04:29:38.882112    6298 driver.go:392] Setting default libvirt URI to qemu:///system
	I0722 04:29:38.886697    6298 out.go:177] * Using the qemu2 driver based on user configuration
	I0722 04:29:38.892737    6298 start.go:297] selected driver: qemu2
	I0722 04:29:38.892743    6298 start.go:901] validating driver "qemu2" against <nil>
	I0722 04:29:38.892750    6298 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0722 04:29:38.895183    6298 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0722 04:29:38.897655    6298 out.go:177] * Automatically selected the socket_vmnet network
	I0722 04:29:38.900755    6298 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0722 04:29:38.900787    6298 cni.go:84] Creating CNI manager for ""
	I0722 04:29:38.900795    6298 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0722 04:29:38.900803    6298 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0722 04:29:38.900833    6298 start.go:340] cluster config:
	{Name:embed-certs-660000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-660000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socke
t_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 04:29:38.904615    6298 iso.go:125] acquiring lock: {Name:mkd71eaf3e91c1dd737b75fca5ca69ff9bdad18f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 04:29:38.912717    6298 out.go:177] * Starting "embed-certs-660000" primary control-plane node in "embed-certs-660000" cluster
	I0722 04:29:38.916752    6298 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0722 04:29:38.916767    6298 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0722 04:29:38.916779    6298 cache.go:56] Caching tarball of preloaded images
	I0722 04:29:38.916839    6298 preload.go:172] Found /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0722 04:29:38.916847    6298 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0722 04:29:38.916934    6298 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/embed-certs-660000/config.json ...
	I0722 04:29:38.916948    6298 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/embed-certs-660000/config.json: {Name:mk95b16bdc2fab1df2af2ae3a065d3b785606a93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 04:29:38.917250    6298 start.go:360] acquireMachinesLock for embed-certs-660000: {Name:mkd413881e612ea8d9ddb0175c22cca270cd2452 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0722 04:29:39.337352    6298 start.go:364] duration metric: took 420.085833ms to acquireMachinesLock for "embed-certs-660000"
	I0722 04:29:39.337444    6298 start.go:93] Provisioning new machine with config: &{Name:embed-certs-660000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.30.3 ClusterName:embed-certs-660000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptio
ns:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0722 04:29:39.337638    6298 start.go:125] createHost starting for "" (driver="qemu2")
	I0722 04:29:39.353226    6298 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0722 04:29:39.400859    6298 start.go:159] libmachine.API.Create for "embed-certs-660000" (driver="qemu2")
	I0722 04:29:39.400916    6298 client.go:168] LocalClient.Create starting
	I0722 04:29:39.401048    6298 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19313-1127/.minikube/certs/ca.pem
	I0722 04:29:39.401102    6298 main.go:141] libmachine: Decoding PEM data...
	I0722 04:29:39.401117    6298 main.go:141] libmachine: Parsing certificate...
	I0722 04:29:39.401193    6298 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19313-1127/.minikube/certs/cert.pem
	I0722 04:29:39.401236    6298 main.go:141] libmachine: Decoding PEM data...
	I0722 04:29:39.401253    6298 main.go:141] libmachine: Parsing certificate...
	I0722 04:29:39.401846    6298 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19313-1127/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0722 04:29:39.552739    6298 main.go:141] libmachine: Creating SSH key...
	I0722 04:29:39.760828    6298 main.go:141] libmachine: Creating Disk image...
	I0722 04:29:39.760839    6298 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0722 04:29:39.761023    6298 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/embed-certs-660000/disk.qcow2.raw /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/embed-certs-660000/disk.qcow2
	I0722 04:29:39.770504    6298 main.go:141] libmachine: STDOUT: 
	I0722 04:29:39.770520    6298 main.go:141] libmachine: STDERR: 
	I0722 04:29:39.770561    6298 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/embed-certs-660000/disk.qcow2 +20000M
	I0722 04:29:39.778509    6298 main.go:141] libmachine: STDOUT: Image resized.
	
	I0722 04:29:39.778523    6298 main.go:141] libmachine: STDERR: 
	I0722 04:29:39.778532    6298 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/embed-certs-660000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/embed-certs-660000/disk.qcow2
	I0722 04:29:39.778536    6298 main.go:141] libmachine: Starting QEMU VM...
	I0722 04:29:39.778548    6298 qemu.go:418] Using hvf for hardware acceleration
	I0722 04:29:39.778581    6298 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/embed-certs-660000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19313-1127/.minikube/machines/embed-certs-660000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/embed-certs-660000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:74:80:e7:8a:89 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/embed-certs-660000/disk.qcow2
	I0722 04:29:39.780213    6298 main.go:141] libmachine: STDOUT: 
	I0722 04:29:39.780227    6298 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0722 04:29:39.780244    6298 client.go:171] duration metric: took 379.328708ms to LocalClient.Create
	I0722 04:29:41.782465    6298 start.go:128] duration metric: took 2.444825375s to createHost
	I0722 04:29:41.782550    6298 start.go:83] releasing machines lock for "embed-certs-660000", held for 2.445185041s
	W0722 04:29:41.782696    6298 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0722 04:29:41.790836    6298 out.go:177] * Deleting "embed-certs-660000" in qemu2 ...
	W0722 04:29:41.817596    6298 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0722 04:29:41.817635    6298 start.go:729] Will try again in 5 seconds ...
	I0722 04:29:46.819248    6298 start.go:360] acquireMachinesLock for embed-certs-660000: {Name:mkd413881e612ea8d9ddb0175c22cca270cd2452 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0722 04:29:46.819325    6298 start.go:364] duration metric: took 58.209µs to acquireMachinesLock for "embed-certs-660000"
	I0722 04:29:46.819355    6298 start.go:93] Provisioning new machine with config: &{Name:embed-certs-660000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.30.3 ClusterName:embed-certs-660000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptio
ns:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0722 04:29:46.819427    6298 start.go:125] createHost starting for "" (driver="qemu2")
	I0722 04:29:46.827536    6298 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0722 04:29:46.844133    6298 start.go:159] libmachine.API.Create for "embed-certs-660000" (driver="qemu2")
	I0722 04:29:46.844161    6298 client.go:168] LocalClient.Create starting
	I0722 04:29:46.844227    6298 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19313-1127/.minikube/certs/ca.pem
	I0722 04:29:46.844253    6298 main.go:141] libmachine: Decoding PEM data...
	I0722 04:29:46.844260    6298 main.go:141] libmachine: Parsing certificate...
	I0722 04:29:46.844298    6298 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19313-1127/.minikube/certs/cert.pem
	I0722 04:29:46.844314    6298 main.go:141] libmachine: Decoding PEM data...
	I0722 04:29:46.844320    6298 main.go:141] libmachine: Parsing certificate...
	I0722 04:29:46.844682    6298 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19313-1127/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0722 04:29:47.028501    6298 main.go:141] libmachine: Creating SSH key...
	I0722 04:29:47.114696    6298 main.go:141] libmachine: Creating Disk image...
	I0722 04:29:47.114703    6298 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0722 04:29:47.114874    6298 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/embed-certs-660000/disk.qcow2.raw /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/embed-certs-660000/disk.qcow2
	I0722 04:29:47.123788    6298 main.go:141] libmachine: STDOUT: 
	I0722 04:29:47.123808    6298 main.go:141] libmachine: STDERR: 
	I0722 04:29:47.123867    6298 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/embed-certs-660000/disk.qcow2 +20000M
	I0722 04:29:47.131850    6298 main.go:141] libmachine: STDOUT: Image resized.
	
	I0722 04:29:47.131868    6298 main.go:141] libmachine: STDERR: 
	I0722 04:29:47.131878    6298 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/embed-certs-660000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/embed-certs-660000/disk.qcow2
	I0722 04:29:47.131885    6298 main.go:141] libmachine: Starting QEMU VM...
	I0722 04:29:47.131894    6298 qemu.go:418] Using hvf for hardware acceleration
	I0722 04:29:47.131924    6298 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/embed-certs-660000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19313-1127/.minikube/machines/embed-certs-660000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/embed-certs-660000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:20:65:8e:d5:db -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/embed-certs-660000/disk.qcow2
	I0722 04:29:47.133689    6298 main.go:141] libmachine: STDOUT: 
	I0722 04:29:47.133706    6298 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0722 04:29:47.133720    6298 client.go:171] duration metric: took 289.560125ms to LocalClient.Create
	I0722 04:29:49.134850    6298 start.go:128] duration metric: took 2.315447625s to createHost
	I0722 04:29:49.134926    6298 start.go:83] releasing machines lock for "embed-certs-660000", held for 2.315628s
	W0722 04:29:49.135334    6298 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-660000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-660000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0722 04:29:49.142985    6298 out.go:177] 
	W0722 04:29:49.146971    6298 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0722 04:29:49.146978    6298 out.go:239] * 
	* 
	W0722 04:29:49.147566    6298 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0722 04:29:49.156990    6298 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-660000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-660000 -n embed-certs-660000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-660000 -n embed-certs-660000: exit status 7 (34.160917ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-660000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (10.40s)
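Note: every qemu2 start in this run fails at the same step: socket_vmnet_client cannot reach /var/run/socket_vmnet ("Connection refused"), so QEMU never launches and each profile ends up "Stopped". The following is a minimal, self-contained Go sketch that reproduces only that connectivity check on the test host; it uses just the socket path taken from the log lines above and is an illustration for diagnosis, not part of the test suite.

	// check_socket_vmnet.go: minimal sketch that dials the socket_vmnet control
	// socket used by the qemu2 driver; the path comes from the log output above.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			// Same condition reported as "Connection refused" in the minikube
			// output: the socket_vmnet daemon is not accepting connections.
			fmt.Println("socket_vmnet not reachable:", err)
			return
		}
		defer conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}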

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-239000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-239000 create -f testdata/busybox.yaml: exit status 1 (30.516209ms)

                                                
                                                
** stderr ** 
	error: context "no-preload-239000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-239000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-239000 -n no-preload-239000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-239000 -n no-preload-239000: exit status 7 (33.133708ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-239000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-239000 -n no-preload-239000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-239000 -n no-preload-239000: exit status 7 (32.818291ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-239000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.10s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.15s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-239000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-239000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-239000 describe deploy/metrics-server -n kube-system: exit status 1 (30.122708ms)

                                                
                                                
** stderr ** 
	error: context "no-preload-239000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-239000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-239000 -n no-preload-239000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-239000 -n no-preload-239000: exit status 7 (29.910583ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-239000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.15s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (0.09s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-660000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-660000 create -f testdata/busybox.yaml: exit status 1 (27.312833ms)

                                                
                                                
** stderr ** 
	error: context "embed-certs-660000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-660000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-660000 -n embed-certs-660000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-660000 -n embed-certs-660000: exit status 7 (29.718291ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-660000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-660000 -n embed-certs-660000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-660000 -n embed-certs-660000: exit status 7 (33.549875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-660000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.09s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (5.27s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-239000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-239000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-beta.0: exit status 80 (5.198783958s)

                                                
                                                
-- stdout --
	* [no-preload-239000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19313
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19313-1127/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19313-1127/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "no-preload-239000" primary control-plane node in "no-preload-239000" cluster
	* Restarting existing qemu2 VM for "no-preload-239000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-239000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0722 04:29:49.251996    6355 out.go:291] Setting OutFile to fd 1 ...
	I0722 04:29:49.252152    6355 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:29:49.252155    6355 out.go:304] Setting ErrFile to fd 2...
	I0722 04:29:49.252157    6355 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:29:49.252291    6355 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19313-1127/.minikube/bin
	I0722 04:29:49.253299    6355 out.go:298] Setting JSON to false
	I0722 04:29:49.270893    6355 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5358,"bootTime":1721642431,"procs":458,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0722 04:29:49.270970    6355 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0722 04:29:49.274007    6355 out.go:177] * [no-preload-239000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0722 04:29:49.281015    6355 out.go:177]   - MINIKUBE_LOCATION=19313
	I0722 04:29:49.281036    6355 notify.go:220] Checking for updates...
	I0722 04:29:49.290962    6355 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19313-1127/kubeconfig
	I0722 04:29:49.298966    6355 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0722 04:29:49.303126    6355 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0722 04:29:49.306823    6355 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19313-1127/.minikube
	I0722 04:29:49.309949    6355 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0722 04:29:49.313272    6355 config.go:182] Loaded profile config "no-preload-239000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0-beta.0
	I0722 04:29:49.313530    6355 driver.go:392] Setting default libvirt URI to qemu:///system
	I0722 04:29:49.314945    6355 out.go:177] * Using the qemu2 driver based on existing profile
	I0722 04:29:49.321983    6355 start.go:297] selected driver: qemu2
	I0722 04:29:49.321991    6355 start.go:901] validating driver "qemu2" against &{Name:no-preload-239000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-239000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false Ext
raDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 04:29:49.322061    6355 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0722 04:29:49.324283    6355 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0722 04:29:49.324305    6355 cni.go:84] Creating CNI manager for ""
	I0722 04:29:49.324314    6355 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0722 04:29:49.324364    6355 start.go:340] cluster config:
	{Name:no-preload-239000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:no-preload-239000 Namespace:default A
PIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-ho
st Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 04:29:49.327396    6355 iso.go:125] acquiring lock: {Name:mkd71eaf3e91c1dd737b75fca5ca69ff9bdad18f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 04:29:49.337017    6355 out.go:177] * Starting "no-preload-239000" primary control-plane node in "no-preload-239000" cluster
	I0722 04:29:49.341009    6355 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0722 04:29:49.341068    6355 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/no-preload-239000/config.json ...
	I0722 04:29:49.341080    6355 cache.go:107] acquiring lock: {Name:mk0a4a038b81605f387adfa4e74fec8a71c61136 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 04:29:49.341080    6355 cache.go:107] acquiring lock: {Name:mk09911afc62c2f34d1e1b4af7a153172dc11435 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 04:29:49.341111    6355 cache.go:107] acquiring lock: {Name:mkdc43c2ce4b3617a2322e1f2e26a549e845e50c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 04:29:49.341135    6355 cache.go:115] /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0722 04:29:49.341137    6355 cache.go:115] /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 exists
	I0722 04:29:49.341140    6355 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19313-1127/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 61.208µs
	I0722 04:29:49.341143    6355 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19313-1127/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0" took 68.833µs
	I0722 04:29:49.341146    6355 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0722 04:29:49.341147    6355 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0-beta.0 succeeded
	I0722 04:29:49.341145    6355 cache.go:107] acquiring lock: {Name:mk0d52026fd010672f19837dd13e299656667427 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 04:29:49.341153    6355 cache.go:107] acquiring lock: {Name:mkc7fafac2da8e8e4632674ce2fa91f5eb97c105 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 04:29:49.341155    6355 cache.go:107] acquiring lock: {Name:mkfd29141897d0020349bdcd69bcc5686ae71600 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 04:29:49.341165    6355 cache.go:107] acquiring lock: {Name:mkce32af3aad2c8498cd6078177735187dfef164 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 04:29:49.341177    6355 cache.go:115] /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 exists
	I0722 04:29:49.341185    6355 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19313-1127/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0" took 93.667µs
	I0722 04:29:49.341189    6355 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0-beta.0 succeeded
	I0722 04:29:49.341187    6355 cache.go:115] /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0 exists
	I0722 04:29:49.341195    6355 cache.go:115] /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0722 04:29:49.341204    6355 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/19313-1127/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 51.375µs
	I0722 04:29:49.341209    6355 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0722 04:29:49.341198    6355 cache.go:115] /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 exists
	I0722 04:29:49.341213    6355 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19313-1127/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0" took 48.375µs
	I0722 04:29:49.341215    6355 cache.go:115] /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I0722 04:29:49.341216    6355 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0-beta.0 succeeded
	I0722 04:29:49.341197    6355 cache.go:96] cache image "registry.k8s.io/etcd:3.5.14-0" -> "/Users/jenkins/minikube-integration/19313-1127/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0" took 45.084µs
	I0722 04:29:49.341219    6355 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19313-1127/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 91.416µs
	I0722 04:29:49.341222    6355 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I0722 04:29:49.341220    6355 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.14-0 -> /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.14-0 succeeded
	I0722 04:29:49.341229    6355 cache.go:107] acquiring lock: {Name:mk1ed40748bd4c600eced4790013fe081c2a5b4f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 04:29:49.341270    6355 cache.go:115] /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 exists
	I0722 04:29:49.341274    6355 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.0-beta.0" -> "/Users/jenkins/minikube-integration/19313-1127/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0" took 57.292µs
	I0722 04:29:49.341280    6355 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.0-beta.0 -> /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0-beta.0 succeeded
	I0722 04:29:49.341284    6355 cache.go:87] Successfully saved all images to host disk.
	I0722 04:29:49.341548    6355 start.go:360] acquireMachinesLock for no-preload-239000: {Name:mkd413881e612ea8d9ddb0175c22cca270cd2452 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0722 04:29:49.341575    6355 start.go:364] duration metric: took 21.833µs to acquireMachinesLock for "no-preload-239000"
	I0722 04:29:49.341582    6355 start.go:96] Skipping create...Using existing machine configuration
	I0722 04:29:49.341587    6355 fix.go:54] fixHost starting: 
	I0722 04:29:49.341689    6355 fix.go:112] recreateIfNeeded on no-preload-239000: state=Stopped err=<nil>
	W0722 04:29:49.341697    6355 fix.go:138] unexpected machine state, will restart: <nil>
	I0722 04:29:49.351977    6355 out.go:177] * Restarting existing qemu2 VM for "no-preload-239000" ...
	I0722 04:29:49.356043    6355 qemu.go:418] Using hvf for hardware acceleration
	I0722 04:29:49.356086    6355 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/no-preload-239000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19313-1127/.minikube/machines/no-preload-239000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/no-preload-239000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:0a:6c:dd:d8:3a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/no-preload-239000/disk.qcow2
	I0722 04:29:49.358043    6355 main.go:141] libmachine: STDOUT: 
	I0722 04:29:49.358064    6355 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0722 04:29:49.358092    6355 fix.go:56] duration metric: took 16.504083ms for fixHost
	I0722 04:29:49.358095    6355 start.go:83] releasing machines lock for "no-preload-239000", held for 16.516375ms
	W0722 04:29:49.358102    6355 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0722 04:29:49.358131    6355 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0722 04:29:49.358136    6355 start.go:729] Will try again in 5 seconds ...
	I0722 04:29:54.360275    6355 start.go:360] acquireMachinesLock for no-preload-239000: {Name:mkd413881e612ea8d9ddb0175c22cca270cd2452 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0722 04:29:54.360602    6355 start.go:364] duration metric: took 247.041µs to acquireMachinesLock for "no-preload-239000"
	I0722 04:29:54.360701    6355 start.go:96] Skipping create...Using existing machine configuration
	I0722 04:29:54.360721    6355 fix.go:54] fixHost starting: 
	I0722 04:29:54.361431    6355 fix.go:112] recreateIfNeeded on no-preload-239000: state=Stopped err=<nil>
	W0722 04:29:54.361462    6355 fix.go:138] unexpected machine state, will restart: <nil>
	I0722 04:29:54.369819    6355 out.go:177] * Restarting existing qemu2 VM for "no-preload-239000" ...
	I0722 04:29:54.373823    6355 qemu.go:418] Using hvf for hardware acceleration
	I0722 04:29:54.373989    6355 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/no-preload-239000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19313-1127/.minikube/machines/no-preload-239000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/no-preload-239000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:0a:6c:dd:d8:3a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/no-preload-239000/disk.qcow2
	I0722 04:29:54.382898    6355 main.go:141] libmachine: STDOUT: 
	I0722 04:29:54.382977    6355 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0722 04:29:54.383055    6355 fix.go:56] duration metric: took 22.334042ms for fixHost
	I0722 04:29:54.383072    6355 start.go:83] releasing machines lock for "no-preload-239000", held for 22.4475ms
	W0722 04:29:54.383245    6355 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-239000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-239000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0722 04:29:54.391752    6355 out.go:177] 
	W0722 04:29:54.395825    6355 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0722 04:29:54.395861    6355 out.go:239] * 
	* 
	W0722 04:29:54.398228    6355 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0722 04:29:54.406910    6355 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-239000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0-beta.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-239000 -n no-preload-239000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-239000 -n no-preload-239000: exit status 7 (65.500625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-239000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (5.27s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.13s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-660000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-660000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-660000 describe deploy/metrics-server -n kube-system: exit status 1 (27.377625ms)

                                                
                                                
** stderr ** 
	error: context "embed-certs-660000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-660000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-660000 -n embed-certs-660000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-660000 -n embed-certs-660000: exit status 7 (28.999875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-660000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.13s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (5.26s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-660000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-660000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3: exit status 80 (5.18616875s)

                                                
                                                
-- stdout --
	* [embed-certs-660000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19313
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19313-1127/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19313-1127/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "embed-certs-660000" primary control-plane node in "embed-certs-660000" cluster
	* Restarting existing qemu2 VM for "embed-certs-660000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-660000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0722 04:29:52.943560    6393 out.go:291] Setting OutFile to fd 1 ...
	I0722 04:29:52.943713    6393 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:29:52.943716    6393 out.go:304] Setting ErrFile to fd 2...
	I0722 04:29:52.943719    6393 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:29:52.943846    6393 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19313-1127/.minikube/bin
	I0722 04:29:52.944858    6393 out.go:298] Setting JSON to false
	I0722 04:29:52.960746    6393 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5361,"bootTime":1721642431,"procs":458,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0722 04:29:52.960819    6393 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0722 04:29:52.966133    6393 out.go:177] * [embed-certs-660000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0722 04:29:52.972964    6393 out.go:177]   - MINIKUBE_LOCATION=19313
	I0722 04:29:52.973021    6393 notify.go:220] Checking for updates...
	I0722 04:29:52.979939    6393 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19313-1127/kubeconfig
	I0722 04:29:52.983004    6393 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0722 04:29:52.986049    6393 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0722 04:29:52.989038    6393 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19313-1127/.minikube
	I0722 04:29:52.992013    6393 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0722 04:29:52.995327    6393 config.go:182] Loaded profile config "embed-certs-660000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0722 04:29:52.995580    6393 driver.go:392] Setting default libvirt URI to qemu:///system
	I0722 04:29:52.999048    6393 out.go:177] * Using the qemu2 driver based on existing profile
	I0722 04:29:53.006072    6393 start.go:297] selected driver: qemu2
	I0722 04:29:53.006078    6393 start.go:901] validating driver "qemu2" against &{Name:embed-certs-660000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.30.3 ClusterName:embed-certs-660000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 Cer
tExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 04:29:53.006133    6393 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0722 04:29:53.008336    6393 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0722 04:29:53.008361    6393 cni.go:84] Creating CNI manager for ""
	I0722 04:29:53.008369    6393 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0722 04:29:53.008403    6393 start.go:340] cluster config:
	{Name:embed-certs-660000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-660000 Namespace:default APIServ
erHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVer
sion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 04:29:53.011799    6393 iso.go:125] acquiring lock: {Name:mkd71eaf3e91c1dd737b75fca5ca69ff9bdad18f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 04:29:53.020065    6393 out.go:177] * Starting "embed-certs-660000" primary control-plane node in "embed-certs-660000" cluster
	I0722 04:29:53.023899    6393 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0722 04:29:53.023914    6393 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0722 04:29:53.023924    6393 cache.go:56] Caching tarball of preloaded images
	I0722 04:29:53.023988    6393 preload.go:172] Found /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0722 04:29:53.023994    6393 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0722 04:29:53.024056    6393 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/embed-certs-660000/config.json ...
	I0722 04:29:53.024477    6393 start.go:360] acquireMachinesLock for embed-certs-660000: {Name:mkd413881e612ea8d9ddb0175c22cca270cd2452 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0722 04:29:53.024507    6393 start.go:364] duration metric: took 23.667µs to acquireMachinesLock for "embed-certs-660000"
	I0722 04:29:53.024515    6393 start.go:96] Skipping create...Using existing machine configuration
	I0722 04:29:53.024521    6393 fix.go:54] fixHost starting: 
	I0722 04:29:53.024637    6393 fix.go:112] recreateIfNeeded on embed-certs-660000: state=Stopped err=<nil>
	W0722 04:29:53.024645    6393 fix.go:138] unexpected machine state, will restart: <nil>
	I0722 04:29:53.033021    6393 out.go:177] * Restarting existing qemu2 VM for "embed-certs-660000" ...
	I0722 04:29:53.036968    6393 qemu.go:418] Using hvf for hardware acceleration
	I0722 04:29:53.037006    6393 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/embed-certs-660000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19313-1127/.minikube/machines/embed-certs-660000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/embed-certs-660000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:20:65:8e:d5:db -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/embed-certs-660000/disk.qcow2
	I0722 04:29:53.039010    6393 main.go:141] libmachine: STDOUT: 
	I0722 04:29:53.039031    6393 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0722 04:29:53.039062    6393 fix.go:56] duration metric: took 14.540375ms for fixHost
	I0722 04:29:53.039067    6393 start.go:83] releasing machines lock for "embed-certs-660000", held for 14.555542ms
	W0722 04:29:53.039073    6393 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0722 04:29:53.039130    6393 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0722 04:29:53.039136    6393 start.go:729] Will try again in 5 seconds ...
	I0722 04:29:58.041268    6393 start.go:360] acquireMachinesLock for embed-certs-660000: {Name:mkd413881e612ea8d9ddb0175c22cca270cd2452 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0722 04:29:58.041652    6393 start.go:364] duration metric: took 309.375µs to acquireMachinesLock for "embed-certs-660000"
	I0722 04:29:58.041760    6393 start.go:96] Skipping create...Using existing machine configuration
	I0722 04:29:58.041781    6393 fix.go:54] fixHost starting: 
	I0722 04:29:58.042556    6393 fix.go:112] recreateIfNeeded on embed-certs-660000: state=Stopped err=<nil>
	W0722 04:29:58.042584    6393 fix.go:138] unexpected machine state, will restart: <nil>
	I0722 04:29:58.056422    6393 out.go:177] * Restarting existing qemu2 VM for "embed-certs-660000" ...
	I0722 04:29:58.059265    6393 qemu.go:418] Using hvf for hardware acceleration
	I0722 04:29:58.059501    6393 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/embed-certs-660000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19313-1127/.minikube/machines/embed-certs-660000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/embed-certs-660000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:20:65:8e:d5:db -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/embed-certs-660000/disk.qcow2
	I0722 04:29:58.068528    6393 main.go:141] libmachine: STDOUT: 
	I0722 04:29:58.068608    6393 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0722 04:29:58.068707    6393 fix.go:56] duration metric: took 26.924834ms for fixHost
	I0722 04:29:58.068727    6393 start.go:83] releasing machines lock for "embed-certs-660000", held for 27.051417ms
	W0722 04:29:58.068936    6393 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-660000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-660000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0722 04:29:58.076210    6393 out.go:177] 
	W0722 04:29:58.079339    6393 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0722 04:29:58.079361    6393 out.go:239] * 
	* 
	W0722 04:29:58.082070    6393 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0722 04:29:58.090110    6393 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-660000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.30.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-660000 -n embed-certs-660000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-660000 -n embed-certs-660000: exit status 7 (67.25575ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-660000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (5.26s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-239000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-239000 -n no-preload-239000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-239000 -n no-preload-239000: exit status 7 (32.154542ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-239000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-239000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-239000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-239000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.987375ms)

                                                
                                                
** stderr ** 
	error: context "no-preload-239000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-239000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-239000 -n no-preload-239000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-239000 -n no-preload-239000: exit status 7 (28.595917ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-239000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p no-preload-239000 image list --format=json
start_stop_delete_test.go:304: v1.31.0-beta.0 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.14-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.0-beta.0",
- 	"registry.k8s.io/kube-controller-manager:v1.31.0-beta.0",
- 	"registry.k8s.io/kube-proxy:v1.31.0-beta.0",
- 	"registry.k8s.io/kube-scheduler:v1.31.0-beta.0",
- 	"registry.k8s.io/pause:3.10",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-239000 -n no-preload-239000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-239000 -n no-preload-239000: exit status 7 (29.661666ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-239000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)
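Note: the "images missing (-want +got)" block above has the shape of a go-cmp diff: every expected image carries a leading "-" because "minikube image list" returned nothing for the stopped profile. A minimal sketch of how such a comparison produces that output, assuming github.com/google/go-cmp is used; the variable names and the trimmed expected list are illustrative, not the test's actual code:

package main

import (
	"fmt"

	"github.com/google/go-cmp/cmp"
)

func main() {
	// Images expected for the Kubernetes version under test (illustrative subset).
	want := []string{
		"registry.k8s.io/kube-apiserver:v1.31.0-beta.0",
		"registry.k8s.io/pause:3.10",
	}

	// "minikube image list" produced no entries because the VM never started,
	// so the actual list is empty.
	var got []string

	// cmp.Diff prints entries present only in want with a leading "-",
	// which is exactly the shape of the failure above.
	if diff := cmp.Diff(want, got); diff != "" {
		fmt.Printf("images missing (-want +got):\n%s", diff)
	}
}

With got empty, every want entry is reported as missing, so the diff lists the full expected image set rather than a real image mismatch.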

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-239000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-239000 --alsologtostderr -v=1: exit status 83 (39.938708ms)

                                                
                                                
-- stdout --
	* The control-plane node no-preload-239000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p no-preload-239000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0722 04:29:54.672445    6412 out.go:291] Setting OutFile to fd 1 ...
	I0722 04:29:54.672603    6412 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:29:54.672606    6412 out.go:304] Setting ErrFile to fd 2...
	I0722 04:29:54.672608    6412 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:29:54.672728    6412 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19313-1127/.minikube/bin
	I0722 04:29:54.672946    6412 out.go:298] Setting JSON to false
	I0722 04:29:54.672953    6412 mustload.go:65] Loading cluster: no-preload-239000
	I0722 04:29:54.673129    6412 config.go:182] Loaded profile config "no-preload-239000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0-beta.0
	I0722 04:29:54.677992    6412 out.go:177] * The control-plane node no-preload-239000 host is not running: state=Stopped
	I0722 04:29:54.681025    6412 out.go:177]   To start a cluster, run: "minikube start -p no-preload-239000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-239000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-239000 -n no-preload-239000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-239000 -n no-preload-239000: exit status 7 (28.528833ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-239000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-239000 -n no-preload-239000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-239000 -n no-preload-239000: exit status 7 (27.174083ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-239000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.10s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.81s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-966000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-966000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3: exit status 80 (9.734706916s)

                                                
                                                
-- stdout --
	* [default-k8s-diff-port-966000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19313
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19313-1127/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19313-1127/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "default-k8s-diff-port-966000" primary control-plane node in "default-k8s-diff-port-966000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-966000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0722 04:29:55.082984    6438 out.go:291] Setting OutFile to fd 1 ...
	I0722 04:29:55.083125    6438 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:29:55.083128    6438 out.go:304] Setting ErrFile to fd 2...
	I0722 04:29:55.083131    6438 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:29:55.083241    6438 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19313-1127/.minikube/bin
	I0722 04:29:55.084370    6438 out.go:298] Setting JSON to false
	I0722 04:29:55.100393    6438 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5364,"bootTime":1721642431,"procs":458,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0722 04:29:55.100465    6438 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0722 04:29:55.104973    6438 out.go:177] * [default-k8s-diff-port-966000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0722 04:29:55.112015    6438 out.go:177]   - MINIKUBE_LOCATION=19313
	I0722 04:29:55.112079    6438 notify.go:220] Checking for updates...
	I0722 04:29:55.118977    6438 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19313-1127/kubeconfig
	I0722 04:29:55.122023    6438 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0722 04:29:55.124941    6438 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0722 04:29:55.128029    6438 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19313-1127/.minikube
	I0722 04:29:55.130988    6438 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0722 04:29:55.134240    6438 config.go:182] Loaded profile config "embed-certs-660000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0722 04:29:55.134304    6438 config.go:182] Loaded profile config "multinode-941000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0722 04:29:55.134354    6438 driver.go:392] Setting default libvirt URI to qemu:///system
	I0722 04:29:55.139000    6438 out.go:177] * Using the qemu2 driver based on user configuration
	I0722 04:29:55.145918    6438 start.go:297] selected driver: qemu2
	I0722 04:29:55.145924    6438 start.go:901] validating driver "qemu2" against <nil>
	I0722 04:29:55.145929    6438 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0722 04:29:55.148325    6438 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0722 04:29:55.150995    6438 out.go:177] * Automatically selected the socket_vmnet network
	I0722 04:29:55.154018    6438 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0722 04:29:55.154034    6438 cni.go:84] Creating CNI manager for ""
	I0722 04:29:55.154041    6438 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0722 04:29:55.154045    6438 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0722 04:29:55.154073    6438 start.go:340] cluster config:
	{Name:default-k8s-diff-port-966000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-966000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:c
luster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/s
ocket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 04:29:55.157806    6438 iso.go:125] acquiring lock: {Name:mkd71eaf3e91c1dd737b75fca5ca69ff9bdad18f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 04:29:55.162000    6438 out.go:177] * Starting "default-k8s-diff-port-966000" primary control-plane node in "default-k8s-diff-port-966000" cluster
	I0722 04:29:55.164917    6438 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0722 04:29:55.164933    6438 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0722 04:29:55.164949    6438 cache.go:56] Caching tarball of preloaded images
	I0722 04:29:55.165020    6438 preload.go:172] Found /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0722 04:29:55.165027    6438 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0722 04:29:55.165117    6438 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/default-k8s-diff-port-966000/config.json ...
	I0722 04:29:55.165131    6438 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/default-k8s-diff-port-966000/config.json: {Name:mk6959220371cf6aa5aaafe0a37e922e96975a61 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 04:29:55.165377    6438 start.go:360] acquireMachinesLock for default-k8s-diff-port-966000: {Name:mkd413881e612ea8d9ddb0175c22cca270cd2452 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0722 04:29:55.165412    6438 start.go:364] duration metric: took 27.416µs to acquireMachinesLock for "default-k8s-diff-port-966000"
	I0722 04:29:55.165423    6438 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-966000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuberne
tesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-966000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMS
ize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0722 04:29:55.165453    6438 start.go:125] createHost starting for "" (driver="qemu2")
	I0722 04:29:55.172923    6438 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0722 04:29:55.190391    6438 start.go:159] libmachine.API.Create for "default-k8s-diff-port-966000" (driver="qemu2")
	I0722 04:29:55.190428    6438 client.go:168] LocalClient.Create starting
	I0722 04:29:55.190494    6438 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19313-1127/.minikube/certs/ca.pem
	I0722 04:29:55.190526    6438 main.go:141] libmachine: Decoding PEM data...
	I0722 04:29:55.190536    6438 main.go:141] libmachine: Parsing certificate...
	I0722 04:29:55.190573    6438 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19313-1127/.minikube/certs/cert.pem
	I0722 04:29:55.190599    6438 main.go:141] libmachine: Decoding PEM data...
	I0722 04:29:55.190606    6438 main.go:141] libmachine: Parsing certificate...
	I0722 04:29:55.191008    6438 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19313-1127/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0722 04:29:55.343885    6438 main.go:141] libmachine: Creating SSH key...
	I0722 04:29:55.396229    6438 main.go:141] libmachine: Creating Disk image...
	I0722 04:29:55.396234    6438 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0722 04:29:55.396418    6438 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/default-k8s-diff-port-966000/disk.qcow2.raw /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/default-k8s-diff-port-966000/disk.qcow2
	I0722 04:29:55.405524    6438 main.go:141] libmachine: STDOUT: 
	I0722 04:29:55.405541    6438 main.go:141] libmachine: STDERR: 
	I0722 04:29:55.405588    6438 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/default-k8s-diff-port-966000/disk.qcow2 +20000M
	I0722 04:29:55.413346    6438 main.go:141] libmachine: STDOUT: Image resized.
	
	I0722 04:29:55.413359    6438 main.go:141] libmachine: STDERR: 
	I0722 04:29:55.413370    6438 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/default-k8s-diff-port-966000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/default-k8s-diff-port-966000/disk.qcow2
	I0722 04:29:55.413379    6438 main.go:141] libmachine: Starting QEMU VM...
	I0722 04:29:55.413393    6438 qemu.go:418] Using hvf for hardware acceleration
	I0722 04:29:55.413415    6438 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/default-k8s-diff-port-966000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19313-1127/.minikube/machines/default-k8s-diff-port-966000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/default-k8s-diff-port-966000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:83:e2:ee:44:cb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/default-k8s-diff-port-966000/disk.qcow2
	I0722 04:29:55.414982    6438 main.go:141] libmachine: STDOUT: 
	I0722 04:29:55.414996    6438 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0722 04:29:55.415013    6438 client.go:171] duration metric: took 224.585ms to LocalClient.Create
	I0722 04:29:57.417146    6438 start.go:128] duration metric: took 2.251714583s to createHost
	I0722 04:29:57.417205    6438 start.go:83] releasing machines lock for "default-k8s-diff-port-966000", held for 2.251818333s
	W0722 04:29:57.417331    6438 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0722 04:29:57.431472    6438 out.go:177] * Deleting "default-k8s-diff-port-966000" in qemu2 ...
	W0722 04:29:57.457057    6438 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0722 04:29:57.457084    6438 start.go:729] Will try again in 5 seconds ...
	I0722 04:30:02.459193    6438 start.go:360] acquireMachinesLock for default-k8s-diff-port-966000: {Name:mkd413881e612ea8d9ddb0175c22cca270cd2452 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0722 04:30:02.459497    6438 start.go:364] duration metric: took 236.75µs to acquireMachinesLock for "default-k8s-diff-port-966000"
	I0722 04:30:02.459576    6438 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-966000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuberne
tesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-966000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMS
ize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0722 04:30:02.459691    6438 start.go:125] createHost starting for "" (driver="qemu2")
	I0722 04:30:02.468266    6438 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0722 04:30:02.510169    6438 start.go:159] libmachine.API.Create for "default-k8s-diff-port-966000" (driver="qemu2")
	I0722 04:30:02.510234    6438 client.go:168] LocalClient.Create starting
	I0722 04:30:02.510382    6438 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19313-1127/.minikube/certs/ca.pem
	I0722 04:30:02.510458    6438 main.go:141] libmachine: Decoding PEM data...
	I0722 04:30:02.510478    6438 main.go:141] libmachine: Parsing certificate...
	I0722 04:30:02.510554    6438 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19313-1127/.minikube/certs/cert.pem
	I0722 04:30:02.510606    6438 main.go:141] libmachine: Decoding PEM data...
	I0722 04:30:02.510619    6438 main.go:141] libmachine: Parsing certificate...
	I0722 04:30:02.511450    6438 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19313-1127/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0722 04:30:02.670574    6438 main.go:141] libmachine: Creating SSH key...
	I0722 04:30:02.727207    6438 main.go:141] libmachine: Creating Disk image...
	I0722 04:30:02.727212    6438 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0722 04:30:02.727413    6438 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/default-k8s-diff-port-966000/disk.qcow2.raw /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/default-k8s-diff-port-966000/disk.qcow2
	I0722 04:30:02.736724    6438 main.go:141] libmachine: STDOUT: 
	I0722 04:30:02.736744    6438 main.go:141] libmachine: STDERR: 
	I0722 04:30:02.736805    6438 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/default-k8s-diff-port-966000/disk.qcow2 +20000M
	I0722 04:30:02.744798    6438 main.go:141] libmachine: STDOUT: Image resized.
	
	I0722 04:30:02.744813    6438 main.go:141] libmachine: STDERR: 
	I0722 04:30:02.744823    6438 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/default-k8s-diff-port-966000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/default-k8s-diff-port-966000/disk.qcow2
	I0722 04:30:02.744829    6438 main.go:141] libmachine: Starting QEMU VM...
	I0722 04:30:02.744843    6438 qemu.go:418] Using hvf for hardware acceleration
	I0722 04:30:02.744872    6438 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/default-k8s-diff-port-966000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19313-1127/.minikube/machines/default-k8s-diff-port-966000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/default-k8s-diff-port-966000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:93:c7:3c:01:2e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/default-k8s-diff-port-966000/disk.qcow2
	I0722 04:30:02.746371    6438 main.go:141] libmachine: STDOUT: 
	I0722 04:30:02.746387    6438 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0722 04:30:02.746400    6438 client.go:171] duration metric: took 236.163375ms to LocalClient.Create
	I0722 04:30:04.748563    6438 start.go:128] duration metric: took 2.28887775s to createHost
	I0722 04:30:04.748608    6438 start.go:83] releasing machines lock for "default-k8s-diff-port-966000", held for 2.28913025s
	W0722 04:30:04.748849    6438 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-966000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-966000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0722 04:30:04.756326    6438 out.go:177] 
	W0722 04:30:04.762471    6438 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0722 04:30:04.762531    6438 out.go:239] * 
	* 
	W0722 04:30:04.763839    6438 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0722 04:30:04.775348    6438 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-966000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-966000 -n default-k8s-diff-port-966000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-966000 -n default-k8s-diff-port-966000: exit status 7 (66.840458ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-966000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.81s)
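Note: the provisioning failures in this group reduce to one host-side symptom: socket_vmnet_client cannot connect to the unix socket at /var/run/socket_vmnet (the SocketVMnetPath in the cluster config above), so the wrapped qemu-system-aarch64 process never receives its "-netdev socket,id=net0,fd=3" connection and createHost gives up after the retry. A minimal standalone probe of that precondition, as a hedged sketch; the socket path is taken from the log, everything else is illustrative:

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// SocketVMnetPath from the cluster config in the log above.
	const sock = "/var/run/socket_vmnet"

	// Same failure mode as socket_vmnet_client: a plain unix-socket dial.
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// "connection refused" means the socket file exists but nothing is
		// listening on it; "no such file or directory" means it was never created.
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
		os.Exit(1)
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is accepting connections at", sock)
}

A "connection refused" from this probe on the build host points at the socket_vmnet daemon not running (or not listening on that path), which matches the repeated ERROR lines in the stdout captured above.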

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-660000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-660000 -n embed-certs-660000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-660000 -n embed-certs-660000: exit status 7 (31.864917ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-660000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-660000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-660000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-660000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.980125ms)

                                                
                                                
** stderr ** 
	error: context "embed-certs-660000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-660000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-660000 -n embed-certs-660000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-660000 -n embed-certs-660000: exit status 7 (28.420084ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-660000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p embed-certs-660000 image list --format=json
start_stop_delete_test.go:304: v1.30.3 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.30.3",
- 	"registry.k8s.io/kube-controller-manager:v1.30.3",
- 	"registry.k8s.io/kube-proxy:v1.30.3",
- 	"registry.k8s.io/kube-scheduler:v1.30.3",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-660000 -n embed-certs-660000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-660000 -n embed-certs-660000: exit status 7 (28.653375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-660000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-660000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-660000 --alsologtostderr -v=1: exit status 83 (39.803084ms)

                                                
                                                
-- stdout --
	* The control-plane node embed-certs-660000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p embed-certs-660000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0722 04:29:58.354620    6460 out.go:291] Setting OutFile to fd 1 ...
	I0722 04:29:58.354772    6460 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:29:58.354775    6460 out.go:304] Setting ErrFile to fd 2...
	I0722 04:29:58.354777    6460 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:29:58.354890    6460 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19313-1127/.minikube/bin
	I0722 04:29:58.355110    6460 out.go:298] Setting JSON to false
	I0722 04:29:58.355119    6460 mustload.go:65] Loading cluster: embed-certs-660000
	I0722 04:29:58.355294    6460 config.go:182] Loaded profile config "embed-certs-660000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0722 04:29:58.359186    6460 out.go:177] * The control-plane node embed-certs-660000 host is not running: state=Stopped
	I0722 04:29:58.363226    6460 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-660000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-660000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-660000 -n embed-certs-660000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-660000 -n embed-certs-660000: exit status 7 (28.072292ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-660000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-660000 -n embed-certs-660000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-660000 -n embed-certs-660000: exit status 7 (28.452041ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-660000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.10s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (9.88s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-206000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-206000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-beta.0: exit status 80 (9.811724541s)

                                                
                                                
-- stdout --
	* [newest-cni-206000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19313
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19313-1127/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19313-1127/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "newest-cni-206000" primary control-plane node in "newest-cni-206000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-206000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0722 04:29:58.665468    6477 out.go:291] Setting OutFile to fd 1 ...
	I0722 04:29:58.665610    6477 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:29:58.665613    6477 out.go:304] Setting ErrFile to fd 2...
	I0722 04:29:58.665616    6477 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:29:58.665741    6477 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19313-1127/.minikube/bin
	I0722 04:29:58.666759    6477 out.go:298] Setting JSON to false
	I0722 04:29:58.682918    6477 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5367,"bootTime":1721642431,"procs":455,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0722 04:29:58.682982    6477 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0722 04:29:58.687145    6477 out.go:177] * [newest-cni-206000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0722 04:29:58.694150    6477 out.go:177]   - MINIKUBE_LOCATION=19313
	I0722 04:29:58.694197    6477 notify.go:220] Checking for updates...
	I0722 04:29:58.700151    6477 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19313-1127/kubeconfig
	I0722 04:29:58.703172    6477 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0722 04:29:58.706183    6477 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0722 04:29:58.709138    6477 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19313-1127/.minikube
	I0722 04:29:58.712165    6477 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0722 04:29:58.713981    6477 config.go:182] Loaded profile config "default-k8s-diff-port-966000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0722 04:29:58.714042    6477 config.go:182] Loaded profile config "multinode-941000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0722 04:29:58.714091    6477 driver.go:392] Setting default libvirt URI to qemu:///system
	I0722 04:29:58.722131    6477 out.go:177] * Using the qemu2 driver based on user configuration
	I0722 04:29:58.728125    6477 start.go:297] selected driver: qemu2
	I0722 04:29:58.728130    6477 start.go:901] validating driver "qemu2" against <nil>
	I0722 04:29:58.728136    6477 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0722 04:29:58.730377    6477 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0722 04:29:58.730398    6477 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0722 04:29:58.738146    6477 out.go:177] * Automatically selected the socket_vmnet network
	I0722 04:29:58.741198    6477 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0722 04:29:58.741215    6477 cni.go:84] Creating CNI manager for ""
	I0722 04:29:58.741223    6477 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0722 04:29:58.741228    6477 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0722 04:29:58.741260    6477 start.go:340] cluster config:
	{Name:newest-cni-206000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-206000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fals
e DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 04:29:58.744990    6477 iso.go:125] acquiring lock: {Name:mkd71eaf3e91c1dd737b75fca5ca69ff9bdad18f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 04:29:58.752181    6477 out.go:177] * Starting "newest-cni-206000" primary control-plane node in "newest-cni-206000" cluster
	I0722 04:29:58.756188    6477 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0722 04:29:58.756203    6477 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4
	I0722 04:29:58.756217    6477 cache.go:56] Caching tarball of preloaded images
	I0722 04:29:58.756278    6477 preload.go:172] Found /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0722 04:29:58.756284    6477 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on docker
	I0722 04:29:58.756360    6477 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/newest-cni-206000/config.json ...
	I0722 04:29:58.756380    6477 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/newest-cni-206000/config.json: {Name:mk016b4b03d0a78fd8c70c96dd8cf4b7a91663d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 04:29:58.756711    6477 start.go:360] acquireMachinesLock for newest-cni-206000: {Name:mkd413881e612ea8d9ddb0175c22cca270cd2452 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0722 04:29:58.756746    6477 start.go:364] duration metric: took 29.209µs to acquireMachinesLock for "newest-cni-206000"
	I0722 04:29:58.756757    6477 start.go:93] Provisioning new machine with config: &{Name:newest-cni-206000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-206000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Us
ers:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0722 04:29:58.756790    6477 start.go:125] createHost starting for "" (driver="qemu2")
	I0722 04:29:58.765102    6477 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0722 04:29:58.783361    6477 start.go:159] libmachine.API.Create for "newest-cni-206000" (driver="qemu2")
	I0722 04:29:58.783396    6477 client.go:168] LocalClient.Create starting
	I0722 04:29:58.783466    6477 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19313-1127/.minikube/certs/ca.pem
	I0722 04:29:58.783498    6477 main.go:141] libmachine: Decoding PEM data...
	I0722 04:29:58.783508    6477 main.go:141] libmachine: Parsing certificate...
	I0722 04:29:58.783545    6477 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19313-1127/.minikube/certs/cert.pem
	I0722 04:29:58.783574    6477 main.go:141] libmachine: Decoding PEM data...
	I0722 04:29:58.783582    6477 main.go:141] libmachine: Parsing certificate...
	I0722 04:29:58.783977    6477 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19313-1127/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0722 04:29:58.926391    6477 main.go:141] libmachine: Creating SSH key...
	I0722 04:29:59.044818    6477 main.go:141] libmachine: Creating Disk image...
	I0722 04:29:59.044824    6477 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0722 04:29:59.045013    6477 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/newest-cni-206000/disk.qcow2.raw /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/newest-cni-206000/disk.qcow2
	I0722 04:29:59.054420    6477 main.go:141] libmachine: STDOUT: 
	I0722 04:29:59.054434    6477 main.go:141] libmachine: STDERR: 
	I0722 04:29:59.054497    6477 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/newest-cni-206000/disk.qcow2 +20000M
	I0722 04:29:59.062357    6477 main.go:141] libmachine: STDOUT: Image resized.
	
	I0722 04:29:59.062374    6477 main.go:141] libmachine: STDERR: 
	I0722 04:29:59.062386    6477 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/newest-cni-206000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/newest-cni-206000/disk.qcow2
	I0722 04:29:59.062391    6477 main.go:141] libmachine: Starting QEMU VM...
	I0722 04:29:59.062403    6477 qemu.go:418] Using hvf for hardware acceleration
	I0722 04:29:59.062428    6477 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/newest-cni-206000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19313-1127/.minikube/machines/newest-cni-206000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/newest-cni-206000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:a2:ab:0f:be:e9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/newest-cni-206000/disk.qcow2
	I0722 04:29:59.064066    6477 main.go:141] libmachine: STDOUT: 
	I0722 04:29:59.064080    6477 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0722 04:29:59.064096    6477 client.go:171] duration metric: took 280.700625ms to LocalClient.Create
	I0722 04:30:01.066208    6477 start.go:128] duration metric: took 2.309436542s to createHost
	I0722 04:30:01.066255    6477 start.go:83] releasing machines lock for "newest-cni-206000", held for 2.309540583s
	W0722 04:30:01.066298    6477 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0722 04:30:01.075599    6477 out.go:177] * Deleting "newest-cni-206000" in qemu2 ...
	W0722 04:30:01.096907    6477 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0722 04:30:01.096954    6477 start.go:729] Will try again in 5 seconds ...
	I0722 04:30:06.099084    6477 start.go:360] acquireMachinesLock for newest-cni-206000: {Name:mkd413881e612ea8d9ddb0175c22cca270cd2452 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0722 04:30:06.099481    6477 start.go:364] duration metric: took 319.041µs to acquireMachinesLock for "newest-cni-206000"
	I0722 04:30:06.099645    6477 start.go:93] Provisioning new machine with config: &{Name:newest-cni-206000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-206000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Us
ers:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0722 04:30:06.099953    6477 start.go:125] createHost starting for "" (driver="qemu2")
	I0722 04:30:06.104956    6477 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0722 04:30:06.153479    6477 start.go:159] libmachine.API.Create for "newest-cni-206000" (driver="qemu2")
	I0722 04:30:06.153535    6477 client.go:168] LocalClient.Create starting
	I0722 04:30:06.153638    6477 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19313-1127/.minikube/certs/ca.pem
	I0722 04:30:06.153688    6477 main.go:141] libmachine: Decoding PEM data...
	I0722 04:30:06.153704    6477 main.go:141] libmachine: Parsing certificate...
	I0722 04:30:06.153769    6477 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19313-1127/.minikube/certs/cert.pem
	I0722 04:30:06.153799    6477 main.go:141] libmachine: Decoding PEM data...
	I0722 04:30:06.153813    6477 main.go:141] libmachine: Parsing certificate...
	I0722 04:30:06.154425    6477 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19313-1127/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso...
	I0722 04:30:06.308734    6477 main.go:141] libmachine: Creating SSH key...
	I0722 04:30:06.384002    6477 main.go:141] libmachine: Creating Disk image...
	I0722 04:30:06.384007    6477 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0722 04:30:06.384213    6477 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/newest-cni-206000/disk.qcow2.raw /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/newest-cni-206000/disk.qcow2
	I0722 04:30:06.393449    6477 main.go:141] libmachine: STDOUT: 
	I0722 04:30:06.393473    6477 main.go:141] libmachine: STDERR: 
	I0722 04:30:06.393526    6477 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/newest-cni-206000/disk.qcow2 +20000M
	I0722 04:30:06.401347    6477 main.go:141] libmachine: STDOUT: Image resized.
	
	I0722 04:30:06.401362    6477 main.go:141] libmachine: STDERR: 
	I0722 04:30:06.401374    6477 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/newest-cni-206000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/newest-cni-206000/disk.qcow2
	I0722 04:30:06.401377    6477 main.go:141] libmachine: Starting QEMU VM...
	I0722 04:30:06.401388    6477 qemu.go:418] Using hvf for hardware acceleration
	I0722 04:30:06.401422    6477 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/newest-cni-206000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19313-1127/.minikube/machines/newest-cni-206000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/newest-cni-206000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f2:ba:5e:6d:b9:8a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/newest-cni-206000/disk.qcow2
	I0722 04:30:06.403004    6477 main.go:141] libmachine: STDOUT: 
	I0722 04:30:06.403020    6477 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0722 04:30:06.403032    6477 client.go:171] duration metric: took 249.495666ms to LocalClient.Create
	I0722 04:30:08.405187    6477 start.go:128] duration metric: took 2.305235166s to createHost
	I0722 04:30:08.405252    6477 start.go:83] releasing machines lock for "newest-cni-206000", held for 2.305782375s
	W0722 04:30:08.405663    6477 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-206000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-206000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0722 04:30:08.418272    6477 out.go:177] 
	W0722 04:30:08.425330    6477 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0722 04:30:08.425357    6477 out.go:239] * 
	* 
	W0722 04:30:08.427881    6477 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0722 04:30:08.436302    6477 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-206000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-beta.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-206000 -n newest-cni-206000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-206000 -n newest-cni-206000: exit status 7 (65.909666ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-206000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (9.88s)
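Every failure in this block traces back to the same host-side condition: `socket_vmnet_client` cannot reach the unix socket at /var/run/socket_vmnet, so QEMU never receives its network file descriptor and the VM is never created. The following standalone Go sketch is hypothetical and not part of the minikube test suite; it simply dials that socket directly. On the affected host it would report the same "connection refused" seen in the log, while a host with the socket_vmnet daemon running would connect successfully.

	// probe_socket_vmnet.go: hypothetical check, not part of minikube.
	// It dials the same unix socket the qemu2 driver passes to
	// socket_vmnet_client; "connection refused" here matches the failure above.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		const sock = "/var/run/socket_vmnet" // path reported in the failing log
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			fmt.Fprintf(os.Stderr, "socket_vmnet unreachable: %v\n", err)
			os.Exit(1)
		}
		defer conn.Close()
		fmt.Println("socket_vmnet is accepting connections at", sock)
	}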

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-966000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-966000 create -f testdata/busybox.yaml: exit status 1 (30.306833ms)

                                                
                                                
** stderr ** 
	error: context "default-k8s-diff-port-966000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-966000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-966000 -n default-k8s-diff-port-966000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-966000 -n default-k8s-diff-port-966000: exit status 7 (27.827834ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-966000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-966000 -n default-k8s-diff-port-966000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-966000 -n default-k8s-diff-port-966000: exit status 7 (29.079334ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-966000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-966000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-966000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-966000 describe deploy/metrics-server -n kube-system: exit status 1 (26.093166ms)

                                                
                                                
** stderr ** 
	error: context "default-k8s-diff-port-966000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-966000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-966000 -n default-k8s-diff-port-966000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-966000 -n default-k8s-diff-port-966000: exit status 7 (28.079625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-966000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (6.24s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-966000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-966000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3: exit status 80 (6.171558417s)

                                                
                                                
-- stdout --
	* [default-k8s-diff-port-966000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19313
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19313-1127/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19313-1127/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "default-k8s-diff-port-966000" primary control-plane node in "default-k8s-diff-port-966000" cluster
	* Restarting existing qemu2 VM for "default-k8s-diff-port-966000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-966000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0722 04:30:07.359466    6787 out.go:291] Setting OutFile to fd 1 ...
	I0722 04:30:07.359610    6787 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:30:07.359613    6787 out.go:304] Setting ErrFile to fd 2...
	I0722 04:30:07.359616    6787 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:30:07.359753    6787 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19313-1127/.minikube/bin
	I0722 04:30:07.360718    6787 out.go:298] Setting JSON to false
	I0722 04:30:07.376821    6787 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5376,"bootTime":1721642431,"procs":463,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0722 04:30:07.376898    6787 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0722 04:30:07.381728    6787 out.go:177] * [default-k8s-diff-port-966000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0722 04:30:07.388591    6787 out.go:177]   - MINIKUBE_LOCATION=19313
	I0722 04:30:07.388646    6787 notify.go:220] Checking for updates...
	I0722 04:30:07.395717    6787 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19313-1127/kubeconfig
	I0722 04:30:07.398731    6787 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0722 04:30:07.401713    6787 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0722 04:30:07.404744    6787 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19313-1127/.minikube
	I0722 04:30:07.405984    6787 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0722 04:30:07.409074    6787 config.go:182] Loaded profile config "default-k8s-diff-port-966000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0722 04:30:07.409337    6787 driver.go:392] Setting default libvirt URI to qemu:///system
	I0722 04:30:07.412681    6787 out.go:177] * Using the qemu2 driver based on existing profile
	I0722 04:30:07.417723    6787 start.go:297] selected driver: qemu2
	I0722 04:30:07.417732    6787 start.go:901] validating driver "qemu2" against &{Name:default-k8s-diff-port-966000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-966000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:f
alse ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 04:30:07.417814    6787 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0722 04:30:07.420119    6787 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0722 04:30:07.420160    6787 cni.go:84] Creating CNI manager for ""
	I0722 04:30:07.420167    6787 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0722 04:30:07.420202    6787 start.go:340] cluster config:
	{Name:default-k8s-diff-port-966000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:default-k8s-diff-port-966000 Name
space:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/min
ikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 04:30:07.423691    6787 iso.go:125] acquiring lock: {Name:mkd71eaf3e91c1dd737b75fca5ca69ff9bdad18f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 04:30:07.431683    6787 out.go:177] * Starting "default-k8s-diff-port-966000" primary control-plane node in "default-k8s-diff-port-966000" cluster
	I0722 04:30:07.435773    6787 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0722 04:30:07.435790    6787 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0722 04:30:07.435804    6787 cache.go:56] Caching tarball of preloaded images
	I0722 04:30:07.435871    6787 preload.go:172] Found /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0722 04:30:07.435877    6787 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0722 04:30:07.435940    6787 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/default-k8s-diff-port-966000/config.json ...
	I0722 04:30:07.436395    6787 start.go:360] acquireMachinesLock for default-k8s-diff-port-966000: {Name:mkd413881e612ea8d9ddb0175c22cca270cd2452 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0722 04:30:08.405433    6787 start.go:364] duration metric: took 969.023ms to acquireMachinesLock for "default-k8s-diff-port-966000"
	I0722 04:30:08.405576    6787 start.go:96] Skipping create...Using existing machine configuration
	I0722 04:30:08.405612    6787 fix.go:54] fixHost starting: 
	I0722 04:30:08.406308    6787 fix.go:112] recreateIfNeeded on default-k8s-diff-port-966000: state=Stopped err=<nil>
	W0722 04:30:08.406352    6787 fix.go:138] unexpected machine state, will restart: <nil>
	I0722 04:30:08.422315    6787 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-966000" ...
	I0722 04:30:08.429331    6787 qemu.go:418] Using hvf for hardware acceleration
	I0722 04:30:08.429521    6787 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/default-k8s-diff-port-966000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19313-1127/.minikube/machines/default-k8s-diff-port-966000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/default-k8s-diff-port-966000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:93:c7:3c:01:2e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/default-k8s-diff-port-966000/disk.qcow2
	I0722 04:30:08.439482    6787 main.go:141] libmachine: STDOUT: 
	I0722 04:30:08.439574    6787 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0722 04:30:08.439702    6787 fix.go:56] duration metric: took 34.07975ms for fixHost
	I0722 04:30:08.439721    6787 start.go:83] releasing machines lock for "default-k8s-diff-port-966000", held for 34.252ms
	W0722 04:30:08.439757    6787 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0722 04:30:08.439987    6787 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0722 04:30:08.440011    6787 start.go:729] Will try again in 5 seconds ...
	I0722 04:30:13.442175    6787 start.go:360] acquireMachinesLock for default-k8s-diff-port-966000: {Name:mkd413881e612ea8d9ddb0175c22cca270cd2452 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0722 04:30:13.442648    6787 start.go:364] duration metric: took 370.5µs to acquireMachinesLock for "default-k8s-diff-port-966000"
	I0722 04:30:13.442775    6787 start.go:96] Skipping create...Using existing machine configuration
	I0722 04:30:13.442796    6787 fix.go:54] fixHost starting: 
	I0722 04:30:13.443584    6787 fix.go:112] recreateIfNeeded on default-k8s-diff-port-966000: state=Stopped err=<nil>
	W0722 04:30:13.443611    6787 fix.go:138] unexpected machine state, will restart: <nil>
	I0722 04:30:13.454095    6787 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-966000" ...
	I0722 04:30:13.457202    6787 qemu.go:418] Using hvf for hardware acceleration
	I0722 04:30:13.457385    6787 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/default-k8s-diff-port-966000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19313-1127/.minikube/machines/default-k8s-diff-port-966000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/default-k8s-diff-port-966000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:93:c7:3c:01:2e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/default-k8s-diff-port-966000/disk.qcow2
	I0722 04:30:13.466821    6787 main.go:141] libmachine: STDOUT: 
	I0722 04:30:13.466895    6787 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0722 04:30:13.466996    6787 fix.go:56] duration metric: took 24.20175ms for fixHost
	I0722 04:30:13.467016    6787 start.go:83] releasing machines lock for "default-k8s-diff-port-966000", held for 24.343208ms
	W0722 04:30:13.467267    6787 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-966000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-966000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0722 04:30:13.476274    6787 out.go:177] 
	W0722 04:30:13.479367    6787 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0722 04:30:13.479410    6787 out.go:239] * 
	* 
	W0722 04:30:13.482010    6787 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0722 04:30:13.490205    6787 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-966000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.30.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-966000 -n default-k8s-diff-port-966000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-966000 -n default-k8s-diff-port-966000: exit status 7 (66.281375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-966000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (6.24s)
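The post-mortem steps above rely on `minikube status --format={{.Host}}` exiting with status 7 when the host exists but is stopped, which the harness records as "status error: exit status 7 (may be ok)". The Go sketch below mirrors that convention; the helper itself is hypothetical, with the binary path and profile name taken from the log.

	// status_check.go: hypothetical helper mirroring the post-mortem step in
	// helpers_test.go. Exit status 7 from `minikube status` means the host is
	// present but not running, so it is reported rather than treated as fatal.
	package main

	import (
		"errors"
		"fmt"
		"os/exec"
		"strings"
	)

	func hostState(binary, profile string) (string, error) {
		cmd := exec.Command(binary, "status", "--format={{.Host}}", "-p", profile, "-n", profile)
		out, err := cmd.Output() // stdout ("Stopped") is captured even on a non-zero exit
		state := strings.TrimSpace(string(out))
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) && exitErr.ExitCode() == 7 {
			return state, nil // exit 7: host stopped, "may be ok" per the harness
		}
		return state, err
	}

	func main() {
		state, err := hostState("out/minikube-darwin-arm64", "default-k8s-diff-port-966000")
		fmt.Println("host state:", state, "error:", err)
	}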

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (5.26s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-206000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-206000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-beta.0: exit status 80 (5.194609958s)

                                                
                                                
-- stdout --
	* [newest-cni-206000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19313
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19313-1127/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19313-1127/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "newest-cni-206000" primary control-plane node in "newest-cni-206000" cluster
	* Restarting existing qemu2 VM for "newest-cni-206000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-206000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0722 04:30:12.455893    6822 out.go:291] Setting OutFile to fd 1 ...
	I0722 04:30:12.456057    6822 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:30:12.456060    6822 out.go:304] Setting ErrFile to fd 2...
	I0722 04:30:12.456063    6822 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:30:12.456191    6822 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19313-1127/.minikube/bin
	I0722 04:30:12.457231    6822 out.go:298] Setting JSON to false
	I0722 04:30:12.473877    6822 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5381,"bootTime":1721642431,"procs":461,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0722 04:30:12.473950    6822 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0722 04:30:12.477638    6822 out.go:177] * [newest-cni-206000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0722 04:30:12.485642    6822 notify.go:220] Checking for updates...
	I0722 04:30:12.490614    6822 out.go:177]   - MINIKUBE_LOCATION=19313
	I0722 04:30:12.498540    6822 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19313-1127/kubeconfig
	I0722 04:30:12.502583    6822 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0722 04:30:12.506540    6822 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0722 04:30:12.509549    6822 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19313-1127/.minikube
	I0722 04:30:12.513599    6822 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0722 04:30:12.517874    6822 config.go:182] Loaded profile config "newest-cni-206000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0-beta.0
	I0722 04:30:12.518149    6822 driver.go:392] Setting default libvirt URI to qemu:///system
	I0722 04:30:12.521524    6822 out.go:177] * Using the qemu2 driver based on existing profile
	I0722 04:30:12.528617    6822 start.go:297] selected driver: qemu2
	I0722 04:30:12.528624    6822 start.go:901] validating driver "qemu2" against &{Name:newest-cni-206000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-206000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> Expos
edPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 04:30:12.528683    6822 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0722 04:30:12.531012    6822 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0722 04:30:12.531036    6822 cni.go:84] Creating CNI manager for ""
	I0722 04:30:12.531044    6822 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0722 04:30:12.531074    6822 start.go:340] cluster config:
	{Name:newest-cni-206000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:newest-cni-206000 Namespace:default A
PIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false
ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 04:30:12.534496    6822 iso.go:125] acquiring lock: {Name:mkd71eaf3e91c1dd737b75fca5ca69ff9bdad18f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 04:30:12.539561    6822 out.go:177] * Starting "newest-cni-206000" primary control-plane node in "newest-cni-206000" cluster
	I0722 04:30:12.547574    6822 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0722 04:30:12.547592    6822 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4
	I0722 04:30:12.547600    6822 cache.go:56] Caching tarball of preloaded images
	I0722 04:30:12.547658    6822 preload.go:172] Found /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0722 04:30:12.547663    6822 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on docker
	I0722 04:30:12.547723    6822 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/newest-cni-206000/config.json ...
	I0722 04:30:12.548330    6822 start.go:360] acquireMachinesLock for newest-cni-206000: {Name:mkd413881e612ea8d9ddb0175c22cca270cd2452 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0722 04:30:12.548370    6822 start.go:364] duration metric: took 33.125µs to acquireMachinesLock for "newest-cni-206000"
	I0722 04:30:12.548384    6822 start.go:96] Skipping create...Using existing machine configuration
	I0722 04:30:12.548392    6822 fix.go:54] fixHost starting: 
	I0722 04:30:12.548526    6822 fix.go:112] recreateIfNeeded on newest-cni-206000: state=Stopped err=<nil>
	W0722 04:30:12.548535    6822 fix.go:138] unexpected machine state, will restart: <nil>
	I0722 04:30:12.553447    6822 out.go:177] * Restarting existing qemu2 VM for "newest-cni-206000" ...
	I0722 04:30:12.559502    6822 qemu.go:418] Using hvf for hardware acceleration
	I0722 04:30:12.559537    6822 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/newest-cni-206000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19313-1127/.minikube/machines/newest-cni-206000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/newest-cni-206000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f2:ba:5e:6d:b9:8a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/newest-cni-206000/disk.qcow2
	I0722 04:30:12.561661    6822 main.go:141] libmachine: STDOUT: 
	I0722 04:30:12.561683    6822 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0722 04:30:12.561711    6822 fix.go:56] duration metric: took 13.320625ms for fixHost
	I0722 04:30:12.561716    6822 start.go:83] releasing machines lock for "newest-cni-206000", held for 13.341625ms
	W0722 04:30:12.561724    6822 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0722 04:30:12.561761    6822 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0722 04:30:12.561766    6822 start.go:729] Will try again in 5 seconds ...
	I0722 04:30:17.563928    6822 start.go:360] acquireMachinesLock for newest-cni-206000: {Name:mkd413881e612ea8d9ddb0175c22cca270cd2452 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0722 04:30:17.564368    6822 start.go:364] duration metric: took 340.167µs to acquireMachinesLock for "newest-cni-206000"
	I0722 04:30:17.564442    6822 start.go:96] Skipping create...Using existing machine configuration
	I0722 04:30:17.564470    6822 fix.go:54] fixHost starting: 
	I0722 04:30:17.565193    6822 fix.go:112] recreateIfNeeded on newest-cni-206000: state=Stopped err=<nil>
	W0722 04:30:17.565220    6822 fix.go:138] unexpected machine state, will restart: <nil>
	I0722 04:30:17.570471    6822 out.go:177] * Restarting existing qemu2 VM for "newest-cni-206000" ...
	I0722 04:30:17.578387    6822 qemu.go:418] Using hvf for hardware acceleration
	I0722 04:30:17.578606    6822 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/newest-cni-206000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19313-1127/.minikube/machines/newest-cni-206000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/newest-cni-206000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f2:ba:5e:6d:b9:8a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19313-1127/.minikube/machines/newest-cni-206000/disk.qcow2
	I0722 04:30:17.586218    6822 main.go:141] libmachine: STDOUT: 
	I0722 04:30:17.586281    6822 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0722 04:30:17.586353    6822 fix.go:56] duration metric: took 21.893375ms for fixHost
	I0722 04:30:17.586370    6822 start.go:83] releasing machines lock for "newest-cni-206000", held for 21.982333ms
	W0722 04:30:17.586562    6822 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-206000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-206000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0722 04:30:17.594368    6822 out.go:177] 
	W0722 04:30:17.598480    6822 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0722 04:30:17.598530    6822 out.go:239] * 
	* 
	W0722 04:30:17.599842    6822 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0722 04:30:17.610459    6822 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-206000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0-beta.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-206000 -n newest-cni-206000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-206000 -n newest-cni-206000: exit status 7 (67.282709ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-206000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.26s)
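
Note on the failure above: every start attempt for "newest-cni-206000" dies on the same driver error, Failed to connect to "/var/run/socket_vmnet": Connection refused, so the qemu2 VM never boots and the post-stop start exits with status 80. The commands below are an illustrative manual check of the socket_vmnet daemon on the agent, not part of the test run; the paths come from the log above, and how the daemon is restarted depends on how socket_vmnet was installed under /opt/socket_vmnet:

	ls -l /var/run/socket_vmnet                  # the unix socket the driver dials; missing or stale if the daemon is down
	pgrep -fl socket_vmnet                       # is a socket_vmnet process running at all?
	sudo launchctl list | grep -i socket_vmnet   # if it runs as a launchd service, check its state here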

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-966000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-966000 -n default-k8s-diff-port-966000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-966000 -n default-k8s-diff-port-966000: exit status 7 (30.94725ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-966000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)
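
The "context ... does not exist" error above is a downstream effect of the earlier start failure: because default-k8s-diff-port-966000 never came back up, no kubeconfig context was written for the profile, so every kubectl call against it fails before reaching a server. A quick manual confirmation, not part of the test run, would be:

	kubectl config get-contexts                                # list the contexts in the active kubeconfig
	kubectl config get-contexts default-k8s-diff-port-966000   # exits non-zero with "not found" when the profile never started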

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-966000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-966000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-966000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.80125ms)

                                                
                                                
** stderr ** 
	error: context "default-k8s-diff-port-966000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-966000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-966000 -n default-k8s-diff-port-966000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-966000 -n default-k8s-diff-port-966000: exit status 7 (29.691208ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-966000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p default-k8s-diff-port-966000 image list --format=json
start_stop_delete_test.go:304: v1.30.3 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.30.3",
- 	"registry.k8s.io/kube-controller-manager:v1.30.3",
- 	"registry.k8s.io/kube-proxy:v1.30.3",
- 	"registry.k8s.io/kube-scheduler:v1.30.3",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-966000 -n default-k8s-diff-port-966000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-966000 -n default-k8s-diff-port-966000: exit status 7 (30.317167ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-966000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)
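
The image diff above reports every expected v1.30.3 image as missing because "image list" has nothing to show for a VM that never booted; it is the same start failure surfacing again, not a separate image problem. If the profile were running, the images could be inspected manually along these lines (the table output format is an assumption about this minikube build):

	out/minikube-darwin-arm64 -p default-k8s-diff-port-966000 image list --format=table    # human-readable listing of loaded images
	out/minikube-darwin-arm64 -p default-k8s-diff-port-966000 image list | grep apiserver  # spot-check a single expected image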

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-966000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-966000 --alsologtostderr -v=1: exit status 83 (43.165791ms)

                                                
                                                
-- stdout --
	* The control-plane node default-k8s-diff-port-966000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-966000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0722 04:30:13.755555    6846 out.go:291] Setting OutFile to fd 1 ...
	I0722 04:30:13.755690    6846 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:30:13.755693    6846 out.go:304] Setting ErrFile to fd 2...
	I0722 04:30:13.755696    6846 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:30:13.755829    6846 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19313-1127/.minikube/bin
	I0722 04:30:13.756035    6846 out.go:298] Setting JSON to false
	I0722 04:30:13.756042    6846 mustload.go:65] Loading cluster: default-k8s-diff-port-966000
	I0722 04:30:13.756234    6846 config.go:182] Loaded profile config "default-k8s-diff-port-966000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0722 04:30:13.760118    6846 out.go:177] * The control-plane node default-k8s-diff-port-966000 host is not running: state=Stopped
	I0722 04:30:13.768080    6846 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-966000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-966000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-966000 -n default-k8s-diff-port-966000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-966000 -n default-k8s-diff-port-966000: exit status 7 (27.5985ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-966000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-966000 -n default-k8s-diff-port-966000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-966000 -n default-k8s-diff-port-966000: exit status 7 (28.025125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-966000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.10s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p newest-cni-206000 image list --format=json
start_stop_delete_test.go:304: v1.31.0-beta.0 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.14-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.0-beta.0",
- 	"registry.k8s.io/kube-controller-manager:v1.31.0-beta.0",
- 	"registry.k8s.io/kube-proxy:v1.31.0-beta.0",
- 	"registry.k8s.io/kube-scheduler:v1.31.0-beta.0",
- 	"registry.k8s.io/pause:3.10",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-206000 -n newest-cni-206000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-206000 -n newest-cni-206000: exit status 7 (29.864291ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-206000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-206000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-206000 --alsologtostderr -v=1: exit status 83 (38.503125ms)

                                                
                                                
-- stdout --
	* The control-plane node newest-cni-206000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p newest-cni-206000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0722 04:30:17.793519    6871 out.go:291] Setting OutFile to fd 1 ...
	I0722 04:30:17.793673    6871 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:30:17.793677    6871 out.go:304] Setting ErrFile to fd 2...
	I0722 04:30:17.793679    6871 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:30:17.793823    6871 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19313-1127/.minikube/bin
	I0722 04:30:17.794035    6871 out.go:298] Setting JSON to false
	I0722 04:30:17.794041    6871 mustload.go:65] Loading cluster: newest-cni-206000
	I0722 04:30:17.794240    6871 config.go:182] Loaded profile config "newest-cni-206000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0-beta.0
	I0722 04:30:17.797245    6871 out.go:177] * The control-plane node newest-cni-206000 host is not running: state=Stopped
	I0722 04:30:17.800277    6871 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-206000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-206000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-206000 -n newest-cni-206000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-206000 -n newest-cni-206000: exit status 7 (29.728833ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-206000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-206000 -n newest-cni-206000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-206000 -n newest-cni-206000: exit status 7 (29.523125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-206000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.10s)
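
Both Pause failures above (exit status 83) are minikube refusing to pause a profile whose control-plane host is in state=Stopped rather than a crash; the recovery path is the one printed in the stdout blocks above:

	out/minikube-darwin-arm64 start -p newest-cni-206000   # bring the profile back up first (blocked in this run by the socket_vmnet error)
	out/minikube-darwin-arm64 pause -p newest-cni-206000   # pausing only succeeds once the host is Running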

                                                
                                    

Test pass (161/278)

Order passed test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.09
9 TestDownloadOnly/v1.20.0/DeleteAll 0.11
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.1
12 TestDownloadOnly/v1.30.3/json-events 16.99
13 TestDownloadOnly/v1.30.3/preload-exists 0
16 TestDownloadOnly/v1.30.3/kubectl 0
17 TestDownloadOnly/v1.30.3/LogsDuration 0.08
18 TestDownloadOnly/v1.30.3/DeleteAll 0.11
19 TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds 0.1
21 TestDownloadOnly/v1.31.0-beta.0/json-events 12.63
22 TestDownloadOnly/v1.31.0-beta.0/preload-exists 0
25 TestDownloadOnly/v1.31.0-beta.0/kubectl 0
26 TestDownloadOnly/v1.31.0-beta.0/LogsDuration 0.08
27 TestDownloadOnly/v1.31.0-beta.0/DeleteAll 0.11
28 TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds 0.1
30 TestBinaryMirror 0.31
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
36 TestAddons/Setup 226.65
38 TestAddons/parallel/Registry 18.29
39 TestAddons/parallel/Ingress 19.78
40 TestAddons/parallel/InspektorGadget 10.22
41 TestAddons/parallel/MetricsServer 5.24
44 TestAddons/parallel/CSI 42.85
45 TestAddons/parallel/Headlamp 11.43
46 TestAddons/parallel/CloudSpanner 5.16
47 TestAddons/parallel/LocalPath 55.79
48 TestAddons/parallel/NvidiaDevicePlugin 5.15
49 TestAddons/parallel/Yakd 5
50 TestAddons/parallel/Volcano 38.87
53 TestAddons/serial/GCPAuth/Namespaces 0.07
54 TestAddons/StoppedEnableDisable 12.39
62 TestHyperKitDriverInstallOrUpdate 10.82
65 TestErrorSpam/setup 34.33
66 TestErrorSpam/start 0.34
67 TestErrorSpam/status 0.25
68 TestErrorSpam/pause 0.62
69 TestErrorSpam/unpause 0.58
70 TestErrorSpam/stop 64.28
73 TestFunctional/serial/CopySyncFile 0
74 TestFunctional/serial/StartWithProxy 51.09
75 TestFunctional/serial/AuditLog 0
76 TestFunctional/serial/SoftStart 60.68
77 TestFunctional/serial/KubeContext 0.03
78 TestFunctional/serial/KubectlGetPods 0.04
81 TestFunctional/serial/CacheCmd/cache/add_remote 9.85
82 TestFunctional/serial/CacheCmd/cache/add_local 1.08
83 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
84 TestFunctional/serial/CacheCmd/cache/list 0.03
85 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.07
86 TestFunctional/serial/CacheCmd/cache/cache_reload 2.34
87 TestFunctional/serial/CacheCmd/cache/delete 0.07
88 TestFunctional/serial/MinikubeKubectlCmd 0.66
89 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.93
90 TestFunctional/serial/ExtraConfig 34.08
91 TestFunctional/serial/ComponentHealth 0.04
92 TestFunctional/serial/LogsCmd 0.65
93 TestFunctional/serial/LogsFileCmd 0.67
94 TestFunctional/serial/InvalidService 4.23
96 TestFunctional/parallel/ConfigCmd 0.22
97 TestFunctional/parallel/DashboardCmd 12.58
98 TestFunctional/parallel/DryRun 0.23
99 TestFunctional/parallel/InternationalLanguage 0.1
100 TestFunctional/parallel/StatusCmd 0.24
105 TestFunctional/parallel/AddonsCmd 0.1
106 TestFunctional/parallel/PersistentVolumeClaim 24.57
108 TestFunctional/parallel/SSHCmd 0.12
109 TestFunctional/parallel/CpCmd 0.45
111 TestFunctional/parallel/FileSync 0.06
112 TestFunctional/parallel/CertSync 0.39
116 TestFunctional/parallel/NodeLabels 0.04
118 TestFunctional/parallel/NonActiveRuntimeDisabled 0.1
120 TestFunctional/parallel/License 0.21
121 TestFunctional/parallel/Version/short 0.04
122 TestFunctional/parallel/Version/components 0.19
123 TestFunctional/parallel/ImageCommands/ImageListShort 0.08
124 TestFunctional/parallel/ImageCommands/ImageListTable 0.07
125 TestFunctional/parallel/ImageCommands/ImageListJson 0.07
126 TestFunctional/parallel/ImageCommands/ImageListYaml 0.07
127 TestFunctional/parallel/ImageCommands/ImageBuild 5.97
128 TestFunctional/parallel/ImageCommands/Setup 1.73
129 TestFunctional/parallel/DockerEnv/bash 0.28
130 TestFunctional/parallel/UpdateContextCmd/no_changes 0.06
131 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.1
132 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.06
133 TestFunctional/parallel/ServiceCmd/DeployApp 14.08
134 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.5
135 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.36
136 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.14
137 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.13
138 TestFunctional/parallel/ImageCommands/ImageRemove 0.18
139 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.19
140 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.17
142 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.21
143 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
145 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 11.1
146 TestFunctional/parallel/ServiceCmd/List 0.08
147 TestFunctional/parallel/ServiceCmd/JSONOutput 0.08
148 TestFunctional/parallel/ServiceCmd/HTTPS 0.09
149 TestFunctional/parallel/ServiceCmd/Format 0.09
150 TestFunctional/parallel/ServiceCmd/URL 0.09
151 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.04
152 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
153 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.02
154 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.02
155 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
156 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
157 TestFunctional/parallel/ProfileCmd/profile_not_create 0.12
158 TestFunctional/parallel/ProfileCmd/profile_list 0.12
159 TestFunctional/parallel/ProfileCmd/profile_json_output 0.12
160 TestFunctional/parallel/MountCmd/any-port 8.98
161 TestFunctional/parallel/MountCmd/specific-port 0.97
162 TestFunctional/parallel/MountCmd/VerifyCleanup 1.63
163 TestFunctional/delete_echo-server_images 0.03
164 TestFunctional/delete_my-image_image 0.01
165 TestFunctional/delete_minikube_cached_images 0.01
169 TestMultiControlPlane/serial/StartCluster 257.38
170 TestMultiControlPlane/serial/DeployApp 3.97
171 TestMultiControlPlane/serial/PingHostFromPods 0.77
172 TestMultiControlPlane/serial/AddWorkerNode 63.72
173 TestMultiControlPlane/serial/NodeLabels 0.14
174 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.25
175 TestMultiControlPlane/serial/CopyFile 4.34
179 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 150.09
190 TestJSONOutput/start/Audit 0
192 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
196 TestJSONOutput/pause/Audit 0
198 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
202 TestJSONOutput/unpause/Audit 0
204 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
207 TestJSONOutput/stop/Command 3.4
208 TestJSONOutput/stop/Audit 0
210 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
211 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
212 TestErrorJSONOutput 0.2
217 TestMainNoArgs 0.03
264 TestStoppedBinaryUpgrade/Setup 2.14
276 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
280 TestNoKubernetes/serial/VerifyK8sNotRunning 0.04
281 TestNoKubernetes/serial/ProfileList 31.36
282 TestNoKubernetes/serial/Stop 3.03
284 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.04
294 TestStoppedBinaryUpgrade/MinikubeLogs 0.7
299 TestStartStop/group/old-k8s-version/serial/Stop 3.45
300 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.11
312 TestStartStop/group/no-preload/serial/Stop 2.03
313 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.13
317 TestStartStop/group/embed-certs/serial/Stop 3.38
318 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.12
334 TestStartStop/group/default-k8s-diff-port/serial/Stop 2.15
335 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.12
337 TestStartStop/group/newest-cni/serial/DeployApp 0
338 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.06
339 TestStartStop/group/newest-cni/serial/Stop 3.76
340 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.09
346 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
347 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-521000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-521000: exit status 85 (90.911208ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-521000 | jenkins | v1.33.1 | 22 Jul 24 03:28 PDT |          |
	|         | -p download-only-521000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/22 03:28:11
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0722 03:28:11.820631    1620 out.go:291] Setting OutFile to fd 1 ...
	I0722 03:28:11.820788    1620 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 03:28:11.820792    1620 out.go:304] Setting ErrFile to fd 2...
	I0722 03:28:11.820794    1620 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 03:28:11.820915    1620 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19313-1127/.minikube/bin
	W0722 03:28:11.820992    1620 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19313-1127/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19313-1127/.minikube/config/config.json: no such file or directory
	I0722 03:28:11.822254    1620 out.go:298] Setting JSON to true
	I0722 03:28:11.839573    1620 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1660,"bootTime":1721642431,"procs":450,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0722 03:28:11.839647    1620 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0722 03:28:11.846652    1620 out.go:97] [download-only-521000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0722 03:28:11.846778    1620 notify.go:220] Checking for updates...
	W0722 03:28:11.846786    1620 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/preloaded-tarball: no such file or directory
	I0722 03:28:11.849668    1620 out.go:169] MINIKUBE_LOCATION=19313
	I0722 03:28:11.852744    1620 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19313-1127/kubeconfig
	I0722 03:28:11.857683    1620 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0722 03:28:11.860712    1620 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0722 03:28:11.863708    1620 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19313-1127/.minikube
	W0722 03:28:11.869667    1620 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0722 03:28:11.869871    1620 driver.go:392] Setting default libvirt URI to qemu:///system
	I0722 03:28:11.875808    1620 out.go:97] Using the qemu2 driver based on user configuration
	I0722 03:28:11.875832    1620 start.go:297] selected driver: qemu2
	I0722 03:28:11.875836    1620 start.go:901] validating driver "qemu2" against <nil>
	I0722 03:28:11.875939    1620 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0722 03:28:11.879613    1620 out.go:169] Automatically selected the socket_vmnet network
	I0722 03:28:11.886494    1620 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0722 03:28:11.886581    1620 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0722 03:28:11.886648    1620 cni.go:84] Creating CNI manager for ""
	I0722 03:28:11.886665    1620 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0722 03:28:11.886729    1620 start.go:340] cluster config:
	{Name:download-only-521000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-521000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 03:28:11.892081    1620 iso.go:125] acquiring lock: {Name:mkd71eaf3e91c1dd737b75fca5ca69ff9bdad18f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 03:28:11.896532    1620 out.go:97] Downloading VM boot image ...
	I0722 03:28:11.896546    1620 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/iso/arm64/minikube-v1.33.1-1721324531-19298-arm64.iso
	I0722 03:28:18.263064    1620 out.go:97] Starting "download-only-521000" primary control-plane node in "download-only-521000" cluster
	I0722 03:28:18.263102    1620 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0722 03:28:18.315099    1620 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0722 03:28:18.315117    1620 cache.go:56] Caching tarball of preloaded images
	I0722 03:28:18.315255    1620 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0722 03:28:18.320553    1620 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0722 03:28:18.320559    1620 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0722 03:28:18.401544    1620 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0722 03:28:28.716491    1620 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0722 03:28:28.716646    1620 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0722 03:28:29.412509    1620 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0722 03:28:29.412710    1620 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/download-only-521000/config.json ...
	I0722 03:28:29.412742    1620 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/download-only-521000/config.json: {Name:mk9ba44c13276aeb01bcbfbf249d7d467b0155f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 03:28:29.412972    1620 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0722 03:28:29.413153    1620 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0722 03:28:30.205356    1620 out.go:169] 
	W0722 03:28:30.209482    1620 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19313-1127/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x106ea9a60 0x106ea9a60 0x106ea9a60 0x106ea9a60 0x106ea9a60 0x106ea9a60 0x106ea9a60] Decompressors:map[bz2:0x1400080cda0 gz:0x1400080cda8 tar:0x1400080cd30 tar.bz2:0x1400080cd40 tar.gz:0x1400080cd70 tar.xz:0x1400080cd80 tar.zst:0x1400080cd90 tbz2:0x1400080cd40 tgz:0x1400080cd70 txz:0x1400080cd80 tzst:0x1400080cd90 xz:0x1400080cdb0 zip:0x1400080cde0 zst:0x1400080cdb8] Getters:map[file:0x14000791600 http:0x1400098a280 https:0x1400098a2d0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0722 03:28:30.209507    1620 out_reason.go:110] 
	W0722 03:28:30.215447    1620 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0722 03:28:30.219335    1620 out.go:169] 
	
	
	* The control-plane node download-only-521000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-521000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.09s)
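
The cache failure captured in the log above ends in "bad response code: 404": dl.k8s.io serves no darwin/arm64 kubectl (nor its .sha256 checksum) for v1.20.0, since arm64 macOS client binaries were not published for releases that old, which is why caching kubectl for v1.20.0 fails on this arm64 agent even though this LogsDuration check itself passes. The 404 can be reproduced outside the test with a plain HTTP check (network access assumed; -L follows the dl.k8s.io redirect to the storage backend):

	curl -sIL -o /dev/null -w '%{http_code}\n' https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256   # 404 for this version
	curl -sIL -o /dev/null -w '%{http_code}\n' https://dl.k8s.io/release/v1.30.3/bin/darwin/arm64/kubectl.sha256   # succeeds where arm64 builds exist (see the v1.30.3 download later in this report)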

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.11s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.11s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.1s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-521000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.10s)

                                                
                                    
TestDownloadOnly/v1.30.3/json-events (16.99s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-931000 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-931000 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=docker --driver=qemu2 : (16.990698708s)
--- PASS: TestDownloadOnly/v1.30.3/json-events (16.99s)

                                                
                                    
TestDownloadOnly/v1.30.3/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/preload-exists
--- PASS: TestDownloadOnly/v1.30.3/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.3/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/kubectl
--- PASS: TestDownloadOnly/v1.30.3/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.3/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-931000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-931000: exit status 85 (76.847125ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-521000 | jenkins | v1.33.1 | 22 Jul 24 03:28 PDT |                     |
	|         | -p download-only-521000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 22 Jul 24 03:28 PDT | 22 Jul 24 03:28 PDT |
	| delete  | -p download-only-521000        | download-only-521000 | jenkins | v1.33.1 | 22 Jul 24 03:28 PDT | 22 Jul 24 03:28 PDT |
	| start   | -o=json --download-only        | download-only-931000 | jenkins | v1.33.1 | 22 Jul 24 03:28 PDT |                     |
	|         | -p download-only-931000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/22 03:28:30
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0722 03:28:30.623054    1645 out.go:291] Setting OutFile to fd 1 ...
	I0722 03:28:30.623183    1645 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 03:28:30.623186    1645 out.go:304] Setting ErrFile to fd 2...
	I0722 03:28:30.623189    1645 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 03:28:30.623337    1645 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19313-1127/.minikube/bin
	I0722 03:28:30.624437    1645 out.go:298] Setting JSON to true
	I0722 03:28:30.640405    1645 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1679,"bootTime":1721642431,"procs":451,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0722 03:28:30.640481    1645 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0722 03:28:30.644180    1645 out.go:97] [download-only-931000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0722 03:28:30.644285    1645 notify.go:220] Checking for updates...
	I0722 03:28:30.648033    1645 out.go:169] MINIKUBE_LOCATION=19313
	I0722 03:28:30.651056    1645 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19313-1127/kubeconfig
	I0722 03:28:30.654097    1645 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0722 03:28:30.657091    1645 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0722 03:28:30.660141    1645 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19313-1127/.minikube
	W0722 03:28:30.666063    1645 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0722 03:28:30.666208    1645 driver.go:392] Setting default libvirt URI to qemu:///system
	I0722 03:28:30.669044    1645 out.go:97] Using the qemu2 driver based on user configuration
	I0722 03:28:30.669053    1645 start.go:297] selected driver: qemu2
	I0722 03:28:30.669056    1645 start.go:901] validating driver "qemu2" against <nil>
	I0722 03:28:30.669095    1645 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0722 03:28:30.672160    1645 out.go:169] Automatically selected the socket_vmnet network
	I0722 03:28:30.677192    1645 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0722 03:28:30.677294    1645 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0722 03:28:30.677313    1645 cni.go:84] Creating CNI manager for ""
	I0722 03:28:30.677322    1645 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0722 03:28:30.677327    1645 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0722 03:28:30.677376    1645 start.go:340] cluster config:
	{Name:download-only-931000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:download-only-931000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 03:28:30.680745    1645 iso.go:125] acquiring lock: {Name:mkd71eaf3e91c1dd737b75fca5ca69ff9bdad18f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 03:28:30.684097    1645 out.go:97] Starting "download-only-931000" primary control-plane node in "download-only-931000" cluster
	I0722 03:28:30.684108    1645 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0722 03:28:30.737295    1645 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0722 03:28:30.737306    1645 cache.go:56] Caching tarball of preloaded images
	I0722 03:28:30.737468    1645 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0722 03:28:30.741789    1645 out.go:97] Downloading Kubernetes v1.30.3 preload ...
	I0722 03:28:30.741797    1645 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 ...
	I0722 03:28:30.814041    1645 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4?checksum=md5:5a76dba1959f6b6fc5e29e1e172ab9ca -> /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4
	I0722 03:28:42.743926    1645 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 ...
	I0722 03:28:42.744086    1645 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-arm64.tar.lz4 ...
	I0722 03:28:43.286558    1645 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0722 03:28:43.286759    1645 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/download-only-931000/config.json ...
	I0722 03:28:43.286779    1645 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/download-only-931000/config.json: {Name:mkfe2e1e8b2035fbf9778663c9d046e1777b2fcb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 03:28:43.287020    1645 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0722 03:28:43.287143    1645 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/darwin/arm64/v1.30.3/kubectl
	
	
	* The control-plane node download-only-931000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-931000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.3/LogsDuration (0.08s)

                                                
                                    
TestDownloadOnly/v1.30.3/DeleteAll (0.11s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.30.3/DeleteAll (0.11s)

                                                
                                    
TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.1s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-931000
--- PASS: TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.10s)

                                                
                                    
TestDownloadOnly/v1.31.0-beta.0/json-events (12.63s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-903000 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-903000 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=docker --driver=qemu2 : (12.627823625s)
--- PASS: TestDownloadOnly/v1.31.0-beta.0/json-events (12.63s)

                                                
                                    
TestDownloadOnly/v1.31.0-beta.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0-beta.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.0-beta.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/kubectl
--- PASS: TestDownloadOnly/v1.31.0-beta.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-903000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-903000: exit status 85 (77.550875ms)

-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                Args                 |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only             | download-only-521000 | jenkins | v1.33.1 | 22 Jul 24 03:28 PDT |                     |
	|         | -p download-only-521000             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0        |                      |         |         |                     |                     |
	|         | --container-runtime=docker          |                      |         |         |                     |                     |
	|         | --driver=qemu2                      |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.1 | 22 Jul 24 03:28 PDT | 22 Jul 24 03:28 PDT |
	| delete  | -p download-only-521000             | download-only-521000 | jenkins | v1.33.1 | 22 Jul 24 03:28 PDT | 22 Jul 24 03:28 PDT |
	| start   | -o=json --download-only             | download-only-931000 | jenkins | v1.33.1 | 22 Jul 24 03:28 PDT |                     |
	|         | -p download-only-931000             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3        |                      |         |         |                     |                     |
	|         | --container-runtime=docker          |                      |         |         |                     |                     |
	|         | --driver=qemu2                      |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.1 | 22 Jul 24 03:28 PDT | 22 Jul 24 03:28 PDT |
	| delete  | -p download-only-931000             | download-only-931000 | jenkins | v1.33.1 | 22 Jul 24 03:28 PDT | 22 Jul 24 03:28 PDT |
	| start   | -o=json --download-only             | download-only-903000 | jenkins | v1.33.1 | 22 Jul 24 03:28 PDT |                     |
	|         | -p download-only-903000             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0 |                      |         |         |                     |                     |
	|         | --container-runtime=docker          |                      |         |         |                     |                     |
	|         | --driver=qemu2                      |                      |         |         |                     |                     |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/22 03:28:47
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0722 03:28:47.900034    1667 out.go:291] Setting OutFile to fd 1 ...
	I0722 03:28:47.900160    1667 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 03:28:47.900164    1667 out.go:304] Setting ErrFile to fd 2...
	I0722 03:28:47.900166    1667 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 03:28:47.900282    1667 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19313-1127/.minikube/bin
	I0722 03:28:47.901311    1667 out.go:298] Setting JSON to true
	I0722 03:28:47.917126    1667 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":1696,"bootTime":1721642431,"procs":445,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0722 03:28:47.917198    1667 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0722 03:28:47.922222    1667 out.go:97] [download-only-903000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0722 03:28:47.922309    1667 notify.go:220] Checking for updates...
	I0722 03:28:47.928171    1667 out.go:169] MINIKUBE_LOCATION=19313
	I0722 03:28:47.932214    1667 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19313-1127/kubeconfig
	I0722 03:28:47.936167    1667 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0722 03:28:47.939278    1667 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0722 03:28:47.942197    1667 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19313-1127/.minikube
	W0722 03:28:47.948122    1667 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0722 03:28:47.948282    1667 driver.go:392] Setting default libvirt URI to qemu:///system
	I0722 03:28:47.951192    1667 out.go:97] Using the qemu2 driver based on user configuration
	I0722 03:28:47.951202    1667 start.go:297] selected driver: qemu2
	I0722 03:28:47.951205    1667 start.go:901] validating driver "qemu2" against <nil>
	I0722 03:28:47.951274    1667 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0722 03:28:47.952683    1667 out.go:169] Automatically selected the socket_vmnet network
	I0722 03:28:47.957440    1667 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0722 03:28:47.957526    1667 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0722 03:28:47.957543    1667 cni.go:84] Creating CNI manager for ""
	I0722 03:28:47.957552    1667 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0722 03:28:47.957561    1667 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0722 03:28:47.957602    1667 start.go:340] cluster config:
	{Name:download-only-903000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:download-only-903000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 03:28:47.961226    1667 iso.go:125] acquiring lock: {Name:mkd71eaf3e91c1dd737b75fca5ca69ff9bdad18f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 03:28:47.964177    1667 out.go:97] Starting "download-only-903000" primary control-plane node in "download-only-903000" cluster
	I0722 03:28:47.964187    1667 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0722 03:28:48.019910    1667 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-beta.0/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4
	I0722 03:28:48.019933    1667 cache.go:56] Caching tarball of preloaded images
	I0722 03:28:48.020116    1667 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0722 03:28:48.023221    1667 out.go:97] Downloading Kubernetes v1.31.0-beta.0 preload ...
	I0722 03:28:48.023230    1667 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4 ...
	I0722 03:28:48.098762    1667 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-beta.0/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4?checksum=md5:5025ece13368183bde5a7f01207f4bc3 -> /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4
	I0722 03:28:55.914897    1667 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4 ...
	I0722 03:28:55.915052    1667 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-arm64.tar.lz4 ...
	I0722 03:28:56.433418    1667 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on docker
	I0722 03:28:56.433607    1667 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/download-only-903000/config.json ...
	I0722 03:28:56.433628    1667 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/download-only-903000/config.json: {Name:mk883c04086e3925fc4a283b23cbf4999261d634 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 03:28:56.433861    1667 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0722 03:28:56.433993    1667 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0-beta.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0-beta.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19313-1127/.minikube/cache/darwin/arm64/v1.31.0-beta.0/kubectl
	
	
	* The control-plane node download-only-903000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-903000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.08s)

TestDownloadOnly/v1.31.0-beta.0/DeleteAll (0.11s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAll (0.11s)

TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (0.1s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-903000
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (0.10s)

TestBinaryMirror (0.31s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-965000 --alsologtostderr --binary-mirror http://127.0.0.1:49325 --driver=qemu2 
helpers_test.go:175: Cleaning up "binary-mirror-965000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-965000
--- PASS: TestBinaryMirror (0.31s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1029: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-974000
addons_test.go:1029: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons enable dashboard -p addons-974000: exit status 85 (52.687125ms)

-- stdout --
	* Profile "addons-974000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-974000"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1040: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-974000
addons_test.go:1040: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons disable dashboard -p addons-974000: exit status 85 (56.4425ms)

-- stdout --
	* Profile "addons-974000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-974000"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestAddons/Setup (226.65s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-974000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns
addons_test.go:110: (dbg) Done: out/minikube-darwin-arm64 start -p addons-974000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns: (3m46.65083075s)
--- PASS: TestAddons/Setup (226.65s)

TestAddons/parallel/Registry (18.29s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 6.660542ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-656c9c8d9c-sxsl9" [96a1f9fd-b15b-4260-be16-05308483b4e7] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.004305167s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-jxxfm" [2e1c47e5-489b-49b5-bf76-ff18f45aeefb] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004027166s
addons_test.go:342: (dbg) Run:  kubectl --context addons-974000 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-974000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-974000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (7.981470416s)
addons_test.go:361: (dbg) Run:  out/minikube-darwin-arm64 -p addons-974000 ip
2024/07/22 03:33:06 [DEBUG] GET http://192.168.105.2:5000
addons_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 -p addons-974000 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (18.29s)

TestAddons/parallel/Ingress (19.78s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-974000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-974000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-974000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [801160b2-fe70-4711-8961-4b0e1a90ec7f] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [801160b2-fe70-4711-8961-4b0e1a90ec7f] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.003860625s
addons_test.go:264: (dbg) Run:  out/minikube-darwin-arm64 -p addons-974000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-974000 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-darwin-arm64 -p addons-974000 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.105.2
addons_test.go:308: (dbg) Run:  out/minikube-darwin-arm64 -p addons-974000 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-darwin-arm64 -p addons-974000 addons disable ingress-dns --alsologtostderr -v=1: (1.011428167s)
addons_test.go:313: (dbg) Run:  out/minikube-darwin-arm64 -p addons-974000 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-darwin-arm64 -p addons-974000 addons disable ingress --alsologtostderr -v=1: (7.202685792s)
--- PASS: TestAddons/parallel/Ingress (19.78s)

TestAddons/parallel/InspektorGadget (10.22s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:840: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-nkcqw" [86ecb5b3-cf0c-4e4d-9c57-d554904b8a19] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:840: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004060958s
addons_test.go:843: (dbg) Run:  out/minikube-darwin-arm64 addons disable inspektor-gadget -p addons-974000
addons_test.go:843: (dbg) Done: out/minikube-darwin-arm64 addons disable inspektor-gadget -p addons-974000: (5.212118292s)
--- PASS: TestAddons/parallel/InspektorGadget (10.22s)

TestAddons/parallel/MetricsServer (5.24s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 1.426125ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-r6p7f" [955bed3e-a22f-4893-970a-6eeb6930c230] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003939625s
addons_test.go:417: (dbg) Run:  kubectl --context addons-974000 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-darwin-arm64 -p addons-974000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.24s)

TestAddons/parallel/CSI (42.85s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:563: csi-hostpath-driver pods stabilized in 3.966417ms
addons_test.go:566: (dbg) Run:  kubectl --context addons-974000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:571: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-974000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-974000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-974000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-974000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-974000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-974000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-974000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-974000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-974000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-974000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-974000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-974000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-974000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-974000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-974000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-974000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:576: (dbg) Run:  kubectl --context addons-974000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:581: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [380bf0d6-d5e7-4bc0-a9a0-3ac61c7ce8d1] Pending
helpers_test.go:344: "task-pv-pod" [380bf0d6-d5e7-4bc0-a9a0-3ac61c7ce8d1] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [380bf0d6-d5e7-4bc0-a9a0-3ac61c7ce8d1] Running
addons_test.go:581: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 7.004148833s
addons_test.go:586: (dbg) Run:  kubectl --context addons-974000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:591: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-974000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-974000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:596: (dbg) Run:  kubectl --context addons-974000 delete pod task-pv-pod
addons_test.go:602: (dbg) Run:  kubectl --context addons-974000 delete pvc hpvc
addons_test.go:608: (dbg) Run:  kubectl --context addons-974000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:613: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-974000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-974000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-974000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-974000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-974000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:618: (dbg) Run:  kubectl --context addons-974000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:623: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [e8498a74-0e5a-442d-a869-50e627e3ade4] Pending
helpers_test.go:344: "task-pv-pod-restore" [e8498a74-0e5a-442d-a869-50e627e3ade4] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [e8498a74-0e5a-442d-a869-50e627e3ade4] Running
addons_test.go:623: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.003576583s
addons_test.go:628: (dbg) Run:  kubectl --context addons-974000 delete pod task-pv-pod-restore
addons_test.go:632: (dbg) Run:  kubectl --context addons-974000 delete pvc hpvc-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-974000 delete volumesnapshot new-snapshot-demo
addons_test.go:640: (dbg) Run:  out/minikube-darwin-arm64 -p addons-974000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:640: (dbg) Done: out/minikube-darwin-arm64 -p addons-974000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.086994875s)
addons_test.go:644: (dbg) Run:  out/minikube-darwin-arm64 -p addons-974000 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (42.85s)

TestAddons/parallel/Headlamp (11.43s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:826: (dbg) Run:  out/minikube-darwin-arm64 addons enable headlamp -p addons-974000 --alsologtostderr -v=1
addons_test.go:831: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7867546754-v2fjv" [decdf162-3ad8-4ada-b544-bcd5f356f343] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7867546754-v2fjv" [decdf162-3ad8-4ada-b544-bcd5f356f343] Running
addons_test.go:831: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.003768208s
--- PASS: TestAddons/parallel/Headlamp (11.43s)

TestAddons/parallel/CloudSpanner (5.16s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:859: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6fcd4f6f98-vnxdd" [057752a9-2159-4b94-82e6-e9dfeeaf1d3f] Running
addons_test.go:859: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004203416s
addons_test.go:862: (dbg) Run:  out/minikube-darwin-arm64 addons disable cloud-spanner -p addons-974000
--- PASS: TestAddons/parallel/CloudSpanner (5.16s)

TestAddons/parallel/LocalPath (55.79s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:974: (dbg) Run:  kubectl --context addons-974000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:980: (dbg) Run:  kubectl --context addons-974000 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:984: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-974000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-974000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-974000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-974000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-974000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-974000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-974000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-974000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-974000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-974000 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:987: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [05479e47-ebc1-4008-aeb4-94f6ecb040ca] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [05479e47-ebc1-4008-aeb4-94f6ecb040ca] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [05479e47-ebc1-4008-aeb4-94f6ecb040ca] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:987: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.003434708s
addons_test.go:992: (dbg) Run:  kubectl --context addons-974000 get pvc test-pvc -o=json
addons_test.go:1001: (dbg) Run:  out/minikube-darwin-arm64 -p addons-974000 ssh "cat /opt/local-path-provisioner/pvc-03adebf2-6cf2-4579-b65e-47b5c2d754b3_default_test-pvc/file1"
addons_test.go:1013: (dbg) Run:  kubectl --context addons-974000 delete pod test-local-path
addons_test.go:1017: (dbg) Run:  kubectl --context addons-974000 delete pvc test-pvc
addons_test.go:1021: (dbg) Run:  out/minikube-darwin-arm64 -p addons-974000 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1021: (dbg) Done: out/minikube-darwin-arm64 -p addons-974000 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.334814167s)
--- PASS: TestAddons/parallel/LocalPath (55.79s)

TestAddons/parallel/NvidiaDevicePlugin (5.15s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1053: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-hh7p2" [f7dc7904-f55a-44c0-adf3-5550496d37c5] Running
addons_test.go:1053: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004309417s
addons_test.go:1056: (dbg) Run:  out/minikube-darwin-arm64 addons disable nvidia-device-plugin -p addons-974000
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.15s)

TestAddons/parallel/Yakd (5s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1064: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-799879c74f-96dzb" [a08a8560-dd96-47da-af5e-0fc0265b891b] Running
addons_test.go:1064: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.003880625s
--- PASS: TestAddons/parallel/Yakd (5.00s)

TestAddons/parallel/Volcano (38.87s)

=== RUN   TestAddons/parallel/Volcano
=== PAUSE TestAddons/parallel/Volcano

=== CONT  TestAddons/parallel/Volcano
addons_test.go:897: volcano-admission stabilized in 1.365209ms
addons_test.go:889: volcano-scheduler stabilized in 1.396209ms
addons_test.go:905: volcano-controller stabilized in 1.635709ms
addons_test.go:911: (dbg) TestAddons/parallel/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-844f6db89b-n6wb7" [858325d6-29e7-4719-b914-7a609688fc6e] Running
addons_test.go:911: (dbg) TestAddons/parallel/Volcano: app=volcano-scheduler healthy within 5.003452125s
addons_test.go:915: (dbg) TestAddons/parallel/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-5f7844f7bc-lvb2q" [dc7da1a8-6443-484b-8dca-6ddb7e9ad41a] Running
addons_test.go:915: (dbg) TestAddons/parallel/Volcano: app=volcano-admission healthy within 5.003534125s
addons_test.go:919: (dbg) TestAddons/parallel/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-59cb4746db-4tg6g" [af292799-2cbe-4cda-b855-188eef14ca59] Running
addons_test.go:919: (dbg) TestAddons/parallel/Volcano: app=volcano-controller healthy within 5.003366958s
addons_test.go:924: (dbg) Run:  kubectl --context addons-974000 delete -n volcano-system job volcano-admission-init
addons_test.go:930: (dbg) Run:  kubectl --context addons-974000 create -f testdata/vcjob.yaml
addons_test.go:938: (dbg) Run:  kubectl --context addons-974000 get vcjob -n my-volcano
addons_test.go:956: (dbg) TestAddons/parallel/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [6fa64c0d-c3dc-4f88-b543-8f1504d2866f] Pending
helpers_test.go:344: "test-job-nginx-0" [6fa64c0d-c3dc-4f88-b543-8f1504d2866f] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [6fa64c0d-c3dc-4f88-b543-8f1504d2866f] Running
addons_test.go:956: (dbg) TestAddons/parallel/Volcano: volcano.sh/job-name=test-job healthy within 14.003323209s
addons_test.go:960: (dbg) Run:  out/minikube-darwin-arm64 -p addons-974000 addons disable volcano --alsologtostderr -v=1
addons_test.go:960: (dbg) Done: out/minikube-darwin-arm64 -p addons-974000 addons disable volcano --alsologtostderr -v=1: (9.639161875s)
--- PASS: TestAddons/parallel/Volcano (38.87s)

TestAddons/serial/GCPAuth/Namespaces (0.07s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:652: (dbg) Run:  kubectl --context addons-974000 create ns new-namespace
addons_test.go:666: (dbg) Run:  kubectl --context addons-974000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.07s)

TestAddons/StoppedEnableDisable (12.39s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-darwin-arm64 stop -p addons-974000
addons_test.go:174: (dbg) Done: out/minikube-darwin-arm64 stop -p addons-974000: (12.205654s)
addons_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-974000
addons_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-974000
addons_test.go:187: (dbg) Run:  out/minikube-darwin-arm64 addons disable gvisor -p addons-974000
--- PASS: TestAddons/StoppedEnableDisable (12.39s)

TestHyperKitDriverInstallOrUpdate (10.82s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (10.82s)

TestErrorSpam/setup (34.33s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-898000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-898000 --driver=qemu2 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -p nospam-898000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-898000 --driver=qemu2 : (34.32685725s)
--- PASS: TestErrorSpam/setup (34.33s)

TestErrorSpam/start (0.34s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-898000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-898000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-898000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-898000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-898000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-898000 start --dry-run
--- PASS: TestErrorSpam/start (0.34s)

TestErrorSpam/status (0.25s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-898000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-898000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-898000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-898000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-898000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-898000 status
--- PASS: TestErrorSpam/status (0.25s)

TestErrorSpam/pause (0.62s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-898000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-898000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-898000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-898000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-898000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-898000 pause
--- PASS: TestErrorSpam/pause (0.62s)

TestErrorSpam/unpause (0.58s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-898000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-898000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-898000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-898000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-898000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-898000 unpause
--- PASS: TestErrorSpam/unpause (0.58s)

TestErrorSpam/stop (64.28s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-898000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-898000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-898000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-898000 stop: (12.202184458s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-898000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-898000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-898000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-898000 stop: (26.03737325s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-898000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-898000 stop
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-arm64 -p nospam-898000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-898000 stop: (26.034488s)
--- PASS: TestErrorSpam/stop (64.28s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /Users/jenkins/minikube-integration/19313-1127/.minikube/files/etc/test/nested/copy/1618/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (51.09s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-753000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
E0722 03:37:48.154295    1618 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/addons-974000/client.crt: no such file or directory
E0722 03:37:48.161077    1618 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/addons-974000/client.crt: no such file or directory
E0722 03:37:48.173136    1618 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/addons-974000/client.crt: no such file or directory
E0722 03:37:48.195178    1618 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/addons-974000/client.crt: no such file or directory
E0722 03:37:48.237240    1618 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/addons-974000/client.crt: no such file or directory
E0722 03:37:48.319303    1618 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/addons-974000/client.crt: no such file or directory
E0722 03:37:48.481367    1618 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/addons-974000/client.crt: no such file or directory
E0722 03:37:48.803476    1618 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/addons-974000/client.crt: no such file or directory
E0722 03:37:49.445633    1618 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/addons-974000/client.crt: no such file or directory
functional_test.go:2230: (dbg) Done: out/minikube-darwin-arm64 start -p functional-753000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : (51.087522333s)
--- PASS: TestFunctional/serial/StartWithProxy (51.09s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (60.68s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-753000 --alsologtostderr -v=8
E0722 03:37:50.726669    1618 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/addons-974000/client.crt: no such file or directory
E0722 03:37:53.288882    1618 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/addons-974000/client.crt: no such file or directory
E0722 03:37:58.411115    1618 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/addons-974000/client.crt: no such file or directory
E0722 03:38:08.653242    1618 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/addons-974000/client.crt: no such file or directory
E0722 03:38:29.135333    1618 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/addons-974000/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-darwin-arm64 start -p functional-753000 --alsologtostderr -v=8: (1m0.679430084s)
functional_test.go:659: soft start took 1m0.67980075s for "functional-753000" cluster.
--- PASS: TestFunctional/serial/SoftStart (60.68s)

TestFunctional/serial/KubeContext (0.03s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.03s)

TestFunctional/serial/KubectlGetPods (0.04s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-753000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.04s)

TestFunctional/serial/CacheCmd/cache/add_remote (9.85s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-darwin-arm64 -p functional-753000 cache add registry.k8s.io/pause:3.1: (3.776437125s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-darwin-arm64 -p functional-753000 cache add registry.k8s.io/pause:3.3: (3.634257834s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-darwin-arm64 -p functional-753000 cache add registry.k8s.io/pause:latest: (2.435400041s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (9.85s)

TestFunctional/serial/CacheCmd/cache/add_local (1.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-753000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialCacheCmdcacheadd_local4145637106/001
functional_test.go:1085: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 cache add minikube-local-cache-test:functional-753000
functional_test.go:1090: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 cache delete minikube-local-cache-test:functional-753000
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-753000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.08s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.03s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.03s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.07s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.34s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-753000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (66.850291ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-darwin-arm64 -p functional-753000 cache reload: (2.132033708s)
functional_test.go:1159: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.34s)

TestFunctional/serial/CacheCmd/cache/delete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.07s)

TestFunctional/serial/MinikubeKubectlCmd (0.66s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 kubectl -- --context functional-753000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.66s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.93s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-753000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.93s)

TestFunctional/serial/ExtraConfig (34.08s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-753000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0722 03:39:10.097542    1618 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/addons-974000/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-darwin-arm64 start -p functional-753000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (34.081706042s)
functional_test.go:757: restart took 34.081824709s for "functional-753000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (34.08s)

TestFunctional/serial/ComponentHealth (0.04s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-753000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.04s)

TestFunctional/serial/LogsCmd (0.65s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 logs
--- PASS: TestFunctional/serial/LogsCmd (0.65s)

TestFunctional/serial/LogsFileCmd (0.67s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 logs --file /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialLogsFileCmd3766184549/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.67s)

TestFunctional/serial/InvalidService (4.23s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-753000 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-darwin-arm64 service invalid-svc -p functional-753000
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-darwin-arm64 service invalid-svc -p functional-753000: exit status 115 (99.4185ms)

-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.105.4:30813 |
	|-----------|-------------|-------------|----------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-753000 delete -f testdata/invalidsvc.yaml
functional_test.go:2323: (dbg) Done: kubectl --context functional-753000 delete -f testdata/invalidsvc.yaml: (1.03492825s)
--- PASS: TestFunctional/serial/InvalidService (4.23s)

TestFunctional/parallel/ConfigCmd (0.22s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-753000 config get cpus: exit status 14 (32.271875ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-753000 config get cpus: exit status 14 (31.529042ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.22s)

TestFunctional/parallel/DashboardCmd (12.58s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-753000 --alsologtostderr -v=1]
E0722 03:40:32.019601    1618 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/addons-974000/client.crt: no such file or directory
functional_test.go:906: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-753000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 2628: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (12.58s)

TestFunctional/parallel/DryRun (0.23s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-753000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-753000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (116.832916ms)

-- stdout --
	* [functional-753000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19313
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19313-1127/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19313-1127/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0722 03:40:29.888360    2611 out.go:291] Setting OutFile to fd 1 ...
	I0722 03:40:29.888488    2611 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 03:40:29.888492    2611 out.go:304] Setting ErrFile to fd 2...
	I0722 03:40:29.888494    2611 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 03:40:29.888659    2611 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19313-1127/.minikube/bin
	I0722 03:40:29.889710    2611 out.go:298] Setting JSON to false
	I0722 03:40:29.907806    2611 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2398,"bootTime":1721642431,"procs":463,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0722 03:40:29.907893    2611 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0722 03:40:29.912697    2611 out.go:177] * [functional-753000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0722 03:40:29.920703    2611 out.go:177]   - MINIKUBE_LOCATION=19313
	I0722 03:40:29.920831    2611 notify.go:220] Checking for updates...
	I0722 03:40:29.926651    2611 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19313-1127/kubeconfig
	I0722 03:40:29.929673    2611 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0722 03:40:29.932611    2611 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0722 03:40:29.935621    2611 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19313-1127/.minikube
	I0722 03:40:29.938647    2611 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0722 03:40:29.939973    2611 config.go:182] Loaded profile config "functional-753000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0722 03:40:29.940275    2611 driver.go:392] Setting default libvirt URI to qemu:///system
	I0722 03:40:29.944621    2611 out.go:177] * Using the qemu2 driver based on existing profile
	I0722 03:40:29.951514    2611 start.go:297] selected driver: qemu2
	I0722 03:40:29.951522    2611 start.go:901] validating driver "qemu2" against &{Name:functional-753000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.30.3 ClusterName:functional-753000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirat
ion:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 03:40:29.951574    2611 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0722 03:40:29.958602    2611 out.go:177] 
	W0722 03:40:29.962647    2611 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0722 03:40:29.966656    2611 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-753000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.23s)

TestFunctional/parallel/InternationalLanguage (0.1s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-753000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-753000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (104.058375ms)

-- stdout --
	* [functional-753000] minikube v1.33.1 sur Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19313
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19313-1127/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19313-1127/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0722 03:40:30.110006    2622 out.go:291] Setting OutFile to fd 1 ...
	I0722 03:40:30.110128    2622 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 03:40:30.110131    2622 out.go:304] Setting ErrFile to fd 2...
	I0722 03:40:30.110133    2622 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 03:40:30.110255    2622 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19313-1127/.minikube/bin
	I0722 03:40:30.111719    2622 out.go:298] Setting JSON to false
	I0722 03:40:30.128856    2622 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":2399,"bootTime":1721642431,"procs":463,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W0722 03:40:30.128969    2622 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0722 03:40:30.133667    2622 out.go:177] * [functional-753000] minikube v1.33.1 sur Darwin 14.5 (arm64)
	I0722 03:40:30.138678    2622 out.go:177]   - MINIKUBE_LOCATION=19313
	I0722 03:40:30.138774    2622 notify.go:220] Checking for updates...
	I0722 03:40:30.143969    2622 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19313-1127/kubeconfig
	I0722 03:40:30.146650    2622 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0722 03:40:30.149641    2622 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0722 03:40:30.152653    2622 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19313-1127/.minikube
	I0722 03:40:30.155611    2622 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0722 03:40:30.158916    2622 config.go:182] Loaded profile config "functional-753000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0722 03:40:30.159159    2622 driver.go:392] Setting default libvirt URI to qemu:///system
	I0722 03:40:30.163671    2622 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I0722 03:40:30.170623    2622 start.go:297] selected driver: qemu2
	I0722 03:40:30.170629    2622 start.go:901] validating driver "qemu2" against &{Name:functional-753000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.30.3 ClusterName:functional-753000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirat
ion:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 03:40:30.170680    2622 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0722 03:40:30.175629    2622 out.go:177] 
	W0722 03:40:30.179640    2622 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0722 03:40:30.183658    2622 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.10s)

TestFunctional/parallel/StatusCmd (0.24s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 status
functional_test.go:856: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.24s)

TestFunctional/parallel/AddonsCmd (0.1s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.10s)

TestFunctional/parallel/PersistentVolumeClaim (24.57s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [bcabe724-4a68-4e41-b621-987f25a3ca6b] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003391875s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-753000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-753000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-753000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-753000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [c44e1a4d-a410-4584-9858-da67c1c28674] Pending
helpers_test.go:344: "sp-pod" [c44e1a4d-a410-4584-9858-da67c1c28674] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [c44e1a4d-a410-4584-9858-da67c1c28674] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.003824791s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-753000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-753000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-753000 delete -f testdata/storage-provisioner/pod.yaml: (1.172808917s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-753000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [860fcd9a-cde3-42f5-adda-4a09324b6e0b] Pending
helpers_test.go:344: "sp-pod" [860fcd9a-cde3-42f5-adda-4a09324b6e0b] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [860fcd9a-cde3-42f5-adda-4a09324b6e0b] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.003851834s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-753000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (24.57s)

TestFunctional/parallel/SSHCmd (0.12s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.12s)

TestFunctional/parallel/CpCmd (0.45s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 ssh -n functional-753000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 cp functional-753000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd2621393839/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 ssh -n functional-753000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 ssh -n functional-753000 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.45s)

TestFunctional/parallel/FileSync (0.06s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/1618/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 ssh "sudo cat /etc/test/nested/copy/1618/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.06s)

TestFunctional/parallel/CertSync (0.39s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/1618.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 ssh "sudo cat /etc/ssl/certs/1618.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/1618.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 ssh "sudo cat /usr/share/ca-certificates/1618.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/16182.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 ssh "sudo cat /etc/ssl/certs/16182.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/16182.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 ssh "sudo cat /usr/share/ca-certificates/16182.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (0.39s)

TestFunctional/parallel/NodeLabels (0.04s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-753000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.04s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.1s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-753000 ssh "sudo systemctl is-active crio": exit status 1 (100.4365ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.10s)

TestFunctional/parallel/License (0.21s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-darwin-arm64 license
--- PASS: TestFunctional/parallel/License (0.21s)

TestFunctional/parallel/Version/short (0.04s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

TestFunctional/parallel/Version/components (0.19s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.19s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-753000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.3
registry.k8s.io/kube-proxy:v1.30.3
registry.k8s.io/kube-controller-manager:v1.30.3
registry.k8s.io/kube-apiserver:v1.30.3
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-753000
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kicbase/echo-server:functional-753000
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-753000 image ls --format short --alsologtostderr:
I0722 03:40:39.203751    2673 out.go:291] Setting OutFile to fd 1 ...
I0722 03:40:39.203918    2673 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0722 03:40:39.203922    2673 out.go:304] Setting ErrFile to fd 2...
I0722 03:40:39.203924    2673 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0722 03:40:39.204062    2673 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19313-1127/.minikube/bin
I0722 03:40:39.204463    2673 config.go:182] Loaded profile config "functional-753000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0722 03:40:39.204524    2673 config.go:182] Loaded profile config "functional-753000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0722 03:40:39.205377    2673 ssh_runner.go:195] Run: systemctl --version
I0722 03:40:39.205386    2673 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19313-1127/.minikube/machines/functional-753000/id_rsa Username:docker}
I0722 03:40:39.227800    2673 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.08s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-753000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/etcd                        | 3.5.12-0          | 014faa467e297 | 139MB  |
| registry.k8s.io/coredns/coredns             | v1.11.1           | 2437cf7621777 | 57.4MB |
| docker.io/kicbase/echo-server               | functional-753000 | ce2d2cda2d858 | 4.78MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 1611cd07b61d5 | 3.55MB |
| registry.k8s.io/kube-apiserver              | v1.30.3           | 61773190d42ff | 112MB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | ba04bb24b9575 | 29MB   |
| docker.io/library/minikube-local-cache-test | functional-753000 | ed888f04f9c2c | 30B    |
| registry.k8s.io/kube-proxy                  | v1.30.3           | 2351f570ed0ea | 87.9MB |
| docker.io/library/nginx                     | alpine            | 5461b18aaccf3 | 44.8MB |
| registry.k8s.io/pause                       | 3.9               | 829e9de338bd5 | 514kB  |
| docker.io/kubernetesui/metrics-scraper      | <none>            | a422e0e982356 | 42.3MB |
| registry.k8s.io/pause                       | 3.3               | 3d18732f8686c | 484kB  |
| registry.k8s.io/pause                       | 3.1               | 8057e0500773a | 525kB  |
| registry.k8s.io/echoserver-arm              | 1.8               | 72565bf5bbedf | 85MB   |
| registry.k8s.io/kube-scheduler              | v1.30.3           | d48f992a22722 | 60.5MB |
| docker.io/library/nginx                     | latest            | 443d199e8bfcc | 193MB  |
| docker.io/kubernetesui/dashboard            | <none>            | 20b332c9a70d8 | 244MB  |
| registry.k8s.io/pause                       | latest            | 8cb2091f603e7 | 240kB  |
| registry.k8s.io/kube-controller-manager     | v1.30.3           | 8e97cdb19e7cc | 107MB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-753000 image ls --format table --alsologtostderr:
I0722 03:40:42.867486    2685 out.go:291] Setting OutFile to fd 1 ...
I0722 03:40:42.867654    2685 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0722 03:40:42.867661    2685 out.go:304] Setting ErrFile to fd 2...
I0722 03:40:42.867664    2685 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0722 03:40:42.867800    2685 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19313-1127/.minikube/bin
I0722 03:40:42.868257    2685 config.go:182] Loaded profile config "functional-753000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0722 03:40:42.868326    2685 config.go:182] Loaded profile config "functional-753000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0722 03:40:42.869163    2685 ssh_runner.go:195] Run: systemctl --version
I0722 03:40:42.869174    2685 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19313-1127/.minikube/machines/functional-753000/id_rsa Username:docker}
I0722 03:40:42.892866    2685 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.07s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-753000 image ls --format json --alsologtostderr:
[{"id":"61773190d42ff0792f3bab2658e80b1c07519170955bb350b153b564ef28f4ca","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.3"],"size":"112000000"},{"id":"d48f992a22722fc0290769b8fab1186db239bbad4cff837fbb641c55faef9355","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.3"],"size":"60500000"},{"id":"5461b18aaccf366faf9fba071a5f1ac333cd13435366b32c5e9b8ec903fa18a1","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"44800000"},{"id":"014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"139000000"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"484000"},{"id":"2351f570ed0eac5533e538280d73c6aa5d6b6f6379f5f3fac08f51378621e6be","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.30.3"],"size":"87900000"},{"id":"443d199e8bfcce69c2aa494b36b5f8b04c3b183277cd19190e9589fd8552d618
","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"193000000"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"244000000"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3550000"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":[],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"85000000"},{"id":"8e97cdb19e7cc420af7c71de8b5c9ab536bd278758c8c0878c464b833d91b31a","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.3"],"size":"107000000"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29000000"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":
["registry.k8s.io/pause:3.1"],"size":"525000"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"ed888f04f9c2cb5764bd2beb5d33f4cbbe413ea57b69ce1d2d8b10b8b6bdd3c2","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-753000"],"size":"30"},{"id":"2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"57400000"},{"id":"829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"514000"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-753000"],"size":"4780000"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"s
ize":"42300000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-753000 image ls --format json --alsologtostderr:
I0722 03:40:42.801060    2683 out.go:291] Setting OutFile to fd 1 ...
I0722 03:40:42.801230    2683 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0722 03:40:42.801234    2683 out.go:304] Setting ErrFile to fd 2...
I0722 03:40:42.801236    2683 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0722 03:40:42.801380    2683 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19313-1127/.minikube/bin
I0722 03:40:42.801847    2683 config.go:182] Loaded profile config "functional-753000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0722 03:40:42.801921    2683 config.go:182] Loaded profile config "functional-753000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0722 03:40:42.802739    2683 ssh_runner.go:195] Run: systemctl --version
I0722 03:40:42.802752    2683 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19313-1127/.minikube/machines/functional-753000/id_rsa Username:docker}
I0722 03:40:42.824976    2683 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.07s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-753000 image ls --format yaml --alsologtostderr:
- id: 443d199e8bfcce69c2aa494b36b5f8b04c3b183277cd19190e9589fd8552d618
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "193000000"
- id: 829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "514000"
- id: d48f992a22722fc0290769b8fab1186db239bbad4cff837fbb641c55faef9355
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.3
size: "60500000"
- id: 014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "139000000"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3550000"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 61773190d42ff0792f3bab2658e80b1c07519170955bb350b153b564ef28f4ca
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.3
size: "112000000"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-753000
size: "4780000"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "42300000"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "484000"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "525000"
- id: ed888f04f9c2cb5764bd2beb5d33f4cbbe413ea57b69ce1d2d8b10b8b6bdd3c2
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-753000
size: "30"
- id: 8e97cdb19e7cc420af7c71de8b5c9ab536bd278758c8c0878c464b833d91b31a
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.3
size: "107000000"
- id: 2351f570ed0eac5533e538280d73c6aa5d6b6f6379f5f3fac08f51378621e6be
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.30.3
size: "87900000"
- id: 5461b18aaccf366faf9fba071a5f1ac333cd13435366b32c5e9b8ec903fa18a1
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "44800000"
- id: 2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "57400000"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29000000"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests: []
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "85000000"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-753000 image ls --format yaml --alsologtostderr:
I0722 03:40:39.278712    2675 out.go:291] Setting OutFile to fd 1 ...
I0722 03:40:39.278897    2675 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0722 03:40:39.278901    2675 out.go:304] Setting ErrFile to fd 2...
I0722 03:40:39.278903    2675 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0722 03:40:39.279059    2675 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19313-1127/.minikube/bin
I0722 03:40:39.279525    2675 config.go:182] Loaded profile config "functional-753000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0722 03:40:39.279587    2675 config.go:182] Loaded profile config "functional-753000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0722 03:40:39.280459    2675 ssh_runner.go:195] Run: systemctl --version
I0722 03:40:39.280467    2675 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19313-1127/.minikube/machines/functional-753000/id_rsa Username:docker}
I0722 03:40:39.302759    2675 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.07s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (5.97s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-753000 ssh pgrep buildkitd: exit status 1 (56.188042ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 image build -t localhost/my-image:functional-753000 testdata/build --alsologtostderr
2024/07/22 03:40:42 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:314: (dbg) Done: out/minikube-darwin-arm64 -p functional-753000 image build -t localhost/my-image:functional-753000 testdata/build --alsologtostderr: (5.832166875s)
functional_test.go:319: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-753000 image build -t localhost/my-image:functional-753000 testdata/build --alsologtostderr:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
a01966dde7f8: Pulling fs layer
a01966dde7f8: Verifying Checksum
a01966dde7f8: Download complete
a01966dde7f8: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> 71a676dd070f
Step 2/3 : RUN true
---> Running in 5fe8a1166653
---> Removed intermediate container 5fe8a1166653
---> 65fcbe13c158
Step 3/3 : ADD content.txt /
---> df99abf3ed23
Successfully built df99abf3ed23
Successfully tagged localhost/my-image:functional-753000
functional_test.go:322: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-753000 image build -t localhost/my-image:functional-753000 testdata/build --alsologtostderr:
I0722 03:40:39.402728    2679 out.go:291] Setting OutFile to fd 1 ...
I0722 03:40:39.402988    2679 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0722 03:40:39.402992    2679 out.go:304] Setting ErrFile to fd 2...
I0722 03:40:39.402994    2679 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0722 03:40:39.403137    2679 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19313-1127/.minikube/bin
I0722 03:40:39.403583    2679 config.go:182] Loaded profile config "functional-753000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0722 03:40:39.404358    2679 config.go:182] Loaded profile config "functional-753000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0722 03:40:39.405264    2679 ssh_runner.go:195] Run: systemctl --version
I0722 03:40:39.405273    2679 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19313-1127/.minikube/machines/functional-753000/id_rsa Username:docker}
I0722 03:40:39.427710    2679 build_images.go:161] Building image from path: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/build.2945635067.tar
I0722 03:40:39.427776    2679 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0722 03:40:39.432506    2679 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2945635067.tar
I0722 03:40:39.434213    2679 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2945635067.tar: stat -c "%s %y" /var/lib/minikube/build/build.2945635067.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2945635067.tar': No such file or directory
I0722 03:40:39.434230    2679 ssh_runner.go:362] scp /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/build.2945635067.tar --> /var/lib/minikube/build/build.2945635067.tar (3072 bytes)
I0722 03:40:39.446400    2679 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2945635067
I0722 03:40:39.452293    2679 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2945635067 -xf /var/lib/minikube/build/build.2945635067.tar
I0722 03:40:39.457626    2679 docker.go:360] Building image: /var/lib/minikube/build/build.2945635067
I0722 03:40:39.457704    2679 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-753000 /var/lib/minikube/build/build.2945635067
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
Install the buildx component to build images with BuildKit:
https://docs.docker.com/go/buildx/

                                                
                                                
I0722 03:40:45.190816    2679 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-753000 /var/lib/minikube/build/build.2945635067: (5.733111833s)
I0722 03:40:45.190891    2679 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2945635067
I0722 03:40:45.194712    2679 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2945635067.tar
I0722 03:40:45.199986    2679 build_images.go:217] Built localhost/my-image:functional-753000 from /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/build.2945635067.tar
I0722 03:40:45.200014    2679 build_images.go:133] succeeded building to: functional-753000
I0722 03:40:45.200017    2679 build_images.go:134] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (5.97s)
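The build log above implies that testdata/build holds a three-step Dockerfile (FROM gcr.io/k8s-minikube/busybox, RUN true, ADD content.txt /). Below is a minimal Go sketch of driving the same build-and-verify sequence through the minikube binary, the way the test harness shells out to it; the binary path and profile name are taken from this run and error handling is deliberately minimal.

// Sketch only: run `image build`, then confirm the tag shows up in `image ls`.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	const minikube = "out/minikube-darwin-arm64" // binary under test in this report
	build := exec.Command(minikube, "-p", "functional-753000",
		"image", "build", "-t", "localhost/my-image:functional-753000", "testdata/build")
	if out, err := build.CombinedOutput(); err != nil {
		fmt.Printf("build failed: %v\n%s", err, out)
		return
	}
	ls, err := exec.Command(minikube, "-p", "functional-753000", "image", "ls").Output()
	if err != nil {
		fmt.Println("image ls failed:", err)
		return
	}
	fmt.Println("image present:", strings.Contains(string(ls), "localhost/my-image:functional-753000"))
}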

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.73s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull docker.io/kicbase/echo-server:1.0
functional_test.go:341: (dbg) Done: docker pull docker.io/kicbase/echo-server:1.0: (1.718607917s)
functional_test.go:346: (dbg) Run:  docker tag docker.io/kicbase/echo-server:1.0 docker.io/kicbase/echo-server:functional-753000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.73s)

                                                
                                    
TestFunctional/parallel/DockerEnv/bash (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-753000 docker-env) && out/minikube-darwin-arm64 status -p functional-753000"
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-753000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.28s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.06s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.10s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.06s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (14.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1433: (dbg) Run:  kubectl --context functional-753000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-753000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-65f5d5cc78-sh8vd" [0f82c872-47e4-4df3-8a25-f7ba42944f87] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-65f5d5cc78-sh8vd" [0f82c872-47e4-4df3-8a25-f7ba42944f87] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 14.003407542s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (14.08s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 image load --daemon docker.io/kicbase/echo-server:functional-753000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.50s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 image load --daemon docker.io/kicbase/echo-server:functional-753000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.36s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull docker.io/kicbase/echo-server:latest
functional_test.go:239: (dbg) Run:  docker tag docker.io/kicbase/echo-server:latest docker.io/kicbase/echo-server:functional-753000
functional_test.go:244: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 image load --daemon docker.io/kicbase/echo-server:functional-753000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.14s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 image save docker.io/kicbase/echo-server:functional-753000 /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.13s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 image rm docker.io/kicbase/echo-server:functional-753000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.18s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 image load /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.19s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi docker.io/kicbase/echo-server:functional-753000
functional_test.go:423: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 image save --daemon docker.io/kicbase/echo-server:functional-753000 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect docker.io/kicbase/echo-server:functional-753000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.17s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-753000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-753000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-753000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 2502: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-753000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.21s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-753000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-753000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [c83ec0b2-5376-44a0-821e-f9361078db09] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [c83ec0b2-5376-44a0-821e-f9361078db09] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 11.004241792s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.10s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.08s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 service list -o json
functional_test.go:1490: Took "80.052583ms" to run "out/minikube-darwin-arm64 -p functional-753000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.08s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.105.4:30434
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.09s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.09s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.105.4:30434
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.09s)
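A minimal Go sketch of checking, from the host, the NodePort endpoint the test discovered above (http://192.168.105.4:30434). The URL is specific to this run, so substitute whatever `minikube service hello-node --url` prints.

// Sketch only: quick reachability check against the discovered NodePort URL.
package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 5 * time.Second}
	resp, err := client.Get("http://192.168.105.4:30434")
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// echoserver replies with request details; just report the status and size here.
	fmt.Println(resp.Status, len(body), "bytes")
}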

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-753000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.04s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.109.72.251 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:327: DNS resolution by dig for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)
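The dig query above targets the cluster DNS service directly at 10.96.0.10, which is only reachable from the host while `minikube tunnel` is running. A rough Go equivalent, assuming that same tunnel is up, forces lookups through that server:

// Sketch only: resolve the nginx-svc service name via the in-cluster DNS server.
package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	r := &net.Resolver{
		PreferGo: true,
		Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
			d := net.Dialer{Timeout: 5 * time.Second}
			// Force every lookup through the cluster DNS (kube-dns service IP).
			return d.DialContext(ctx, network, "10.96.0.10:53")
		},
	}
	addrs, err := r.LookupHost(context.Background(), "nginx-svc.default.svc.cluster.local.")
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	fmt.Println("nginx-svc resolves to:", addrs)
}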

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:424: tunnel at http://nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-753000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.12s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1311: Took "82.418125ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1325: Took "33.615875ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.12s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1362: Took "80.905583ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1375: Took "34.778792ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.12s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (8.98s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-753000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port1539469767/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1721644827147790000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port1539469767/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1721644827147790000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port1539469767/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1721644827147790000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port1539469767/001/test-1721644827147790000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-753000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (55.676792ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jul 22 10:40 created-by-test
-rw-r--r-- 1 docker docker 24 Jul 22 10:40 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jul 22 10:40 test-1721644827147790000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 ssh cat /mount-9p/test-1721644827147790000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-753000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [5ffab46e-18e9-4276-b08f-1df8fb46981e] Pending
helpers_test.go:344: "busybox-mount" [5ffab46e-18e9-4276-b08f-1df8fb46981e] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [5ffab46e-18e9-4276-b08f-1df8fb46981e] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [5ffab46e-18e9-4276-b08f-1df8fb46981e] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 8.003764417s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-753000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-753000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port1539469767/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.98s)
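The any-port test above is a round trip: files written into the host directory show up under /mount-9p in the guest, and the busybox-mount pod later writes created-by-pod back. Below is a minimal Go sketch of the host-side half, assuming a directory is already mounted at /mount-9p via `minikube mount`; the temp-dir stand-in and file name are illustrative only.

// Sketch only: write a file on the host side of the 9p mount, read it back from the guest.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
)

func main() {
	hostDir := os.TempDir() // stand-in for the directory passed to `minikube mount <dir>:/mount-9p`
	if err := os.WriteFile(filepath.Join(hostDir, "created-by-test"), []byte("hello from the host\n"), 0o644); err != nil {
		panic(err)
	}
	out, err := exec.Command("out/minikube-darwin-arm64", "-p", "functional-753000",
		"ssh", "cat /mount-9p/created-by-test").CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("guest read failed:", err)
	}
}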

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (0.97s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-753000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port2748459747/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-753000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (57.543292ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-753000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port2748459747/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-753000 ssh "sudo umount -f /mount-9p": exit status 1 (58.4055ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-753000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-753000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port2748459747/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (0.97s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-753000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup987689935/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-753000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup987689935/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-753000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup987689935/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Done: out/minikube-darwin-arm64 -p functional-753000 ssh "findmnt -T" /mount1: (1.472262458s)
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-753000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-darwin-arm64 mount -p functional-753000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-753000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup987689935/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-753000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup987689935/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-753000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup987689935/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.63s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.03s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:1.0
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:functional-753000
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

                                                
                                    
TestFunctional/delete_my-image_image (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-753000
--- PASS: TestFunctional/delete_my-image_image (0.01s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-753000
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (257.38s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-248000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 
E0722 03:42:48.153604    1618 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/addons-974000/client.crt: no such file or directory
E0722 03:43:15.861464    1618 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/addons-974000/client.crt: no such file or directory
E0722 03:44:47.538596    1618 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/functional-753000/client.crt: no such file or directory
E0722 03:44:47.544926    1618 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/functional-753000/client.crt: no such file or directory
E0722 03:44:47.557002    1618 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/functional-753000/client.crt: no such file or directory
E0722 03:44:47.579098    1618 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/functional-753000/client.crt: no such file or directory
E0722 03:44:47.621168    1618 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/functional-753000/client.crt: no such file or directory
E0722 03:44:47.701570    1618 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/functional-753000/client.crt: no such file or directory
E0722 03:44:47.863321    1618 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/functional-753000/client.crt: no such file or directory
E0722 03:44:48.185416    1618 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/functional-753000/client.crt: no such file or directory
E0722 03:44:48.825989    1618 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/functional-753000/client.crt: no such file or directory
E0722 03:44:50.108153    1618 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/functional-753000/client.crt: no such file or directory
E0722 03:44:52.670263    1618 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/functional-753000/client.crt: no such file or directory
E0722 03:44:57.792387    1618 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/functional-753000/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-darwin-arm64 start -p ha-248000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 : (4m17.186316792s)
ha_test.go:107: (dbg) Run:  out/minikube-darwin-arm64 -p ha-248000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (257.38s)

                                                
                                    
TestMultiControlPlane/serial/DeployApp (3.97s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-248000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-248000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-darwin-arm64 kubectl -p ha-248000 -- rollout status deployment/busybox: (2.513592625s)
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-248000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-248000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-248000 -- exec busybox-fc5497c4f-54nlt -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-248000 -- exec busybox-fc5497c4f-cmn2p -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-248000 -- exec busybox-fc5497c4f-xgttl -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-248000 -- exec busybox-fc5497c4f-54nlt -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-248000 -- exec busybox-fc5497c4f-cmn2p -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-248000 -- exec busybox-fc5497c4f-xgttl -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-248000 -- exec busybox-fc5497c4f-54nlt -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-248000 -- exec busybox-fc5497c4f-cmn2p -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-248000 -- exec busybox-fc5497c4f-xgttl -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (3.97s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (0.77s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-248000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-248000 -- exec busybox-fc5497c4f-54nlt -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-248000 -- exec busybox-fc5497c4f-54nlt -- sh -c "ping -c 1 192.168.105.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-248000 -- exec busybox-fc5497c4f-cmn2p -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-248000 -- exec busybox-fc5497c4f-cmn2p -- sh -c "ping -c 1 192.168.105.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-248000 -- exec busybox-fc5497c4f-xgttl -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-248000 -- exec busybox-fc5497c4f-xgttl -- sh -c "ping -c 1 192.168.105.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (0.77s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (63.72s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-248000 -v=7 --alsologtostderr
E0722 03:45:08.034532    1618 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/functional-753000/client.crt: no such file or directory
E0722 03:45:28.516645    1618 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/functional-753000/client.crt: no such file or directory
E0722 03:46:09.478765    1618 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/functional-753000/client.crt: no such file or directory
ha_test.go:228: (dbg) Done: out/minikube-darwin-arm64 node add -p ha-248000 -v=7 --alsologtostderr: (1m3.487111833s)
ha_test.go:234: (dbg) Run:  out/minikube-darwin-arm64 -p ha-248000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (63.72s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.14s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-248000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.14s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.25s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.25s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (4.34s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 -p ha-248000 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-248000 cp testdata/cp-test.txt ha-248000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-248000 ssh -n ha-248000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-248000 cp ha-248000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestMultiControlPlaneserialCopyFile1456975827/001/cp-test_ha-248000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-248000 ssh -n ha-248000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-248000 cp ha-248000:/home/docker/cp-test.txt ha-248000-m02:/home/docker/cp-test_ha-248000_ha-248000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-248000 ssh -n ha-248000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-248000 ssh -n ha-248000-m02 "sudo cat /home/docker/cp-test_ha-248000_ha-248000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-248000 cp ha-248000:/home/docker/cp-test.txt ha-248000-m03:/home/docker/cp-test_ha-248000_ha-248000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-248000 ssh -n ha-248000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-248000 ssh -n ha-248000-m03 "sudo cat /home/docker/cp-test_ha-248000_ha-248000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-248000 cp ha-248000:/home/docker/cp-test.txt ha-248000-m04:/home/docker/cp-test_ha-248000_ha-248000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-248000 ssh -n ha-248000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-248000 ssh -n ha-248000-m04 "sudo cat /home/docker/cp-test_ha-248000_ha-248000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-248000 cp testdata/cp-test.txt ha-248000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-248000 ssh -n ha-248000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-248000 cp ha-248000-m02:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestMultiControlPlaneserialCopyFile1456975827/001/cp-test_ha-248000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-248000 ssh -n ha-248000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-248000 cp ha-248000-m02:/home/docker/cp-test.txt ha-248000:/home/docker/cp-test_ha-248000-m02_ha-248000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-248000 ssh -n ha-248000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-248000 ssh -n ha-248000 "sudo cat /home/docker/cp-test_ha-248000-m02_ha-248000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-248000 cp ha-248000-m02:/home/docker/cp-test.txt ha-248000-m03:/home/docker/cp-test_ha-248000-m02_ha-248000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-248000 ssh -n ha-248000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-248000 ssh -n ha-248000-m03 "sudo cat /home/docker/cp-test_ha-248000-m02_ha-248000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-248000 cp ha-248000-m02:/home/docker/cp-test.txt ha-248000-m04:/home/docker/cp-test_ha-248000-m02_ha-248000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-248000 ssh -n ha-248000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-248000 ssh -n ha-248000-m04 "sudo cat /home/docker/cp-test_ha-248000-m02_ha-248000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-248000 cp testdata/cp-test.txt ha-248000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-248000 ssh -n ha-248000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-248000 cp ha-248000-m03:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestMultiControlPlaneserialCopyFile1456975827/001/cp-test_ha-248000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-248000 ssh -n ha-248000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-248000 cp ha-248000-m03:/home/docker/cp-test.txt ha-248000:/home/docker/cp-test_ha-248000-m03_ha-248000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-248000 ssh -n ha-248000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-248000 ssh -n ha-248000 "sudo cat /home/docker/cp-test_ha-248000-m03_ha-248000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-248000 cp ha-248000-m03:/home/docker/cp-test.txt ha-248000-m02:/home/docker/cp-test_ha-248000-m03_ha-248000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-248000 ssh -n ha-248000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-248000 ssh -n ha-248000-m02 "sudo cat /home/docker/cp-test_ha-248000-m03_ha-248000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-248000 cp ha-248000-m03:/home/docker/cp-test.txt ha-248000-m04:/home/docker/cp-test_ha-248000-m03_ha-248000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-248000 ssh -n ha-248000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-248000 ssh -n ha-248000-m04 "sudo cat /home/docker/cp-test_ha-248000-m03_ha-248000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-248000 cp testdata/cp-test.txt ha-248000-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-248000 ssh -n ha-248000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-248000 cp ha-248000-m04:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestMultiControlPlaneserialCopyFile1456975827/001/cp-test_ha-248000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-248000 ssh -n ha-248000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-248000 cp ha-248000-m04:/home/docker/cp-test.txt ha-248000:/home/docker/cp-test_ha-248000-m04_ha-248000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-248000 ssh -n ha-248000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-248000 ssh -n ha-248000 "sudo cat /home/docker/cp-test_ha-248000-m04_ha-248000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-248000 cp ha-248000-m04:/home/docker/cp-test.txt ha-248000-m02:/home/docker/cp-test_ha-248000-m04_ha-248000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-248000 ssh -n ha-248000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-248000 ssh -n ha-248000-m02 "sudo cat /home/docker/cp-test_ha-248000-m04_ha-248000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-248000 cp ha-248000-m04:/home/docker/cp-test.txt ha-248000-m03:/home/docker/cp-test_ha-248000-m04_ha-248000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-248000 ssh -n ha-248000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-248000 ssh -n ha-248000-m03 "sudo cat /home/docker/cp-test_ha-248000-m04_ha-248000-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (4.34s)
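Note: the CopyFile block above repeats one pattern for every node pair: push a file with "minikube cp", read it back with "minikube ssh -n <node> sudo cat", and compare. A minimal Go sketch of that round-trip, reusing the binary path, profile and node names from the log (the helper name and the expected-contents placeholder are illustrative, not part of the test suite):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// copyAndVerify pushes src to node:dst with "minikube cp", reads it back over
// "minikube ssh", and compares the round-tripped contents.
func copyAndVerify(bin, profile, src, node, dst, want string) error {
	if out, err := exec.Command(bin, "-p", profile, "cp", src, node+":"+dst).CombinedOutput(); err != nil {
		return fmt.Errorf("cp failed: %v: %s", err, out)
	}
	out, err := exec.Command(bin, "-p", profile, "ssh", "-n", node, "sudo cat "+dst).CombinedOutput()
	if err != nil {
		return fmt.Errorf("ssh cat failed: %v: %s", err, out)
	}
	if strings.TrimSpace(string(out)) != strings.TrimSpace(want) {
		return fmt.Errorf("unexpected contents on %s:%s", node, dst)
	}
	return nil
}

func main() {
	// The expected contents are a placeholder; the real test compares against testdata/cp-test.txt.
	err := copyAndVerify("out/minikube-darwin-arm64", "ha-248000",
		"testdata/cp-test.txt", "ha-248000-m02", "/home/docker/cp-test.txt", "expected contents")
	fmt.Println(err)
}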

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (150.09s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
E0722 04:01:10.561996    1618 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/functional-753000/client.crt: no such file or directory
E0722 04:02:48.111243    1618 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/addons-974000/client.crt: no such file or directory
ha_test.go:281: (dbg) Done: out/minikube-darwin-arm64 profile list --output json: (2m30.085630792s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (150.09s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (3.4s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-398000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-398000 --output=json --user=testUser: (3.396685041s)
--- PASS: TestJSONOutput/stop/Command (3.40s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.2s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-120000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-120000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (91.849792ms)
-- stdout --
	{"specversion":"1.0","id":"9a46f957-2eaf-44f4-b93a-efce12060fe8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-120000] minikube v1.33.1 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"baa93f7f-43d0-41f5-885f-b133ca3df466","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19313"}}
	{"specversion":"1.0","id":"d4327eb1-6772-4bdd-8889-666ec0f7111e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19313-1127/kubeconfig"}}
	{"specversion":"1.0","id":"e1fc57a4-c071-4027-bbfc-c8bfded7d406","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"6221cf6b-803e-4819-9b88-5b1f57071701","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"911b2309-3486-43d3-a413-0fb3d3c9c640","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19313-1127/.minikube"}}
	{"specversion":"1.0","id":"a911ccc4-213d-4f7d-a218-436165530a0e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"769ecac4-bd2f-4fde-be05-8cdb53ac63ec","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-120000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-120000
--- PASS: TestErrorJSONOutput (0.20s)
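Note: each line in the captured stdout above is a self-contained CloudEvents-style JSON object produced by --output=json. A minimal sketch of consuming such a stream, decoding only fields that are visible in the log (the struct, the file name and the error filter are illustrative assumptions, not minikube's own types):

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// event mirrors the fields visible in the captured --output=json lines.
type event struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	// A saved copy of the JSON stream, e.g. the stdout captured above (illustrative path).
	f, err := os.Open("start-output.json")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	for sc.Scan() {
		var e event
		if err := json.Unmarshal(sc.Bytes(), &e); err != nil {
			continue // ignore any non-JSON lines
		}
		if e.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("error event: %s (exit code %s)\n", e.Data["message"], e.Data["exitcode"])
		}
	}
}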

TestMainNoArgs (0.03s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.03s)

TestStoppedBinaryUpgrade/Setup (2.14s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.14s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-179000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-179000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (100.740417ms)
-- stdout --
	* [NoKubernetes-179000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19313
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19313-1127/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19313-1127/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-179000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-179000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (43.073458ms)
-- stdout --
	* The control-plane node NoKubernetes-179000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-179000"
-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)
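Note: the check above treats any non-zero exit from the "systemctl is-active" probe as kubelet not running; with the host itself stopped, minikube exits with status 83 and prints the hint shown in the stdout block. A sketch of reading that exit code, assuming the same binary and profile names (the helper is illustrative):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// kubeletActive runs the same probe as the test over "minikube ssh".
// Exit code 0 means the kubelet unit is active; any non-zero code (including
// 83 for a stopped host, as seen above) means it is not running.
func kubeletActive(bin, profile string) (bool, int, error) {
	cmd := exec.Command(bin, "ssh", "-p", profile,
		"sudo systemctl is-active --quiet service kubelet")
	err := cmd.Run()
	if err == nil {
		return true, 0, nil
	}
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		return false, ee.ExitCode(), nil
	}
	return false, -1, err
}

func main() {
	active, code, err := kubeletActive("out/minikube-darwin-arm64", "NoKubernetes-179000")
	fmt.Println(active, code, err)
}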

TestNoKubernetes/serial/ProfileList (31.36s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-darwin-arm64 profile list: (15.621989667s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-darwin-arm64 profile list --output=json: (15.741985792s)
--- PASS: TestNoKubernetes/serial/ProfileList (31.36s)

TestNoKubernetes/serial/Stop (3.03s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-179000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-arm64 stop -p NoKubernetes-179000: (3.024889459s)
--- PASS: TestNoKubernetes/serial/Stop (3.03s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-179000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-179000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (42.558792ms)
-- stdout --
	* The control-plane node NoKubernetes-179000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-179000"
-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.7s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-239000
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.70s)

TestStartStop/group/old-k8s-version/serial/Stop (3.45s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-765000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p old-k8s-version-765000 --alsologtostderr -v=3: (3.451723917s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (3.45s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-765000 -n old-k8s-version-765000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-765000 -n old-k8s-version-765000: exit status 7 (46.488792ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-765000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.11s)
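Note: the EnableAddonAfterStop steps rely on two behaviours visible above: "minikube status --format={{.Host}}" exits with code 7 for a stopped profile (treated as "may be ok"), and "addons enable dashboard" still succeeds against that stopped profile. A sketch of the same tolerant status check, with binary and profile names taken from the log and the helper name illustrative:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// hostStatus returns the templated Host field ("Stopped" in the log) together
// with the raw exit code; exit code 7 is expected for a stopped cluster.
func hostStatus(bin, profile string) (string, int, error) {
	out, err := exec.Command(bin, "status", "--format={{.Host}}", "-p", profile, "-n", profile).Output()
	code := 0
	if err != nil {
		var ee *exec.ExitError
		if !errors.As(err, &ee) {
			return "", -1, err
		}
		code = ee.ExitCode()
	}
	return string(out), code, nil
}

func main() {
	host, code, err := hostStatus("out/minikube-darwin-arm64", "old-k8s-version-765000")
	fmt.Println(host, code, err)
	// With the host stopped, "addons enable dashboard -p old-k8s-version-765000"
	// still succeeds, as the log above shows.
}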

TestStartStop/group/no-preload/serial/Stop (2.03s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-239000 --alsologtostderr -v=3
E0722 04:29:47.455831    1618 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1127/.minikube/profiles/functional-753000/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p no-preload-239000 --alsologtostderr -v=3: (2.034567459s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (2.03s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.13s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-239000 -n no-preload-239000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-239000 -n no-preload-239000: exit status 7 (58.249083ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-239000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/embed-certs/serial/Stop (3.38s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-660000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p embed-certs-660000 --alsologtostderr -v=3: (3.377703125s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (3.38s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-660000 -n embed-certs-660000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-660000 -n embed-certs-660000: exit status 7 (55.333208ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-660000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (2.15s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-966000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p default-k8s-diff-port-966000 --alsologtostderr -v=3: (2.149836s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (2.15s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-966000 -n default-k8s-diff-port-966000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-966000 -n default-k8s-diff-port-966000: exit status 7 (55.09925ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-966000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-206000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

TestStartStop/group/newest-cni/serial/Stop (3.76s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-206000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p newest-cni-206000 --alsologtostderr -v=3: (3.756776708s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (3.76s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.09s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-206000 -n newest-cni-206000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-206000 -n newest-cni-206000: exit status 7 (31.455041ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-206000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.09s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

Test skip (23/278)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.30.3/cached-images (0s)

=== RUN   TestDownloadOnly/v1.30.3/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.3/cached-images (0.00s)

TestDownloadOnly/v1.30.3/binaries (0s)

=== RUN   TestDownloadOnly/v1.30.3/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.3/binaries (0.00s)

TestDownloadOnly/v1.31.0-beta.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/cached-images (0.00s)

TestDownloadOnly/v1.31.0-beta.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/binaries (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/HelmTiller (0s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:446: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (2.24s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-055000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-055000
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-055000
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-055000
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-055000
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-055000
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-055000
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-055000
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-055000
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-055000
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-055000
>>> host: /etc/nsswitch.conf:
* Profile "cilium-055000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-055000"
>>> host: /etc/hosts:
* Profile "cilium-055000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-055000"
>>> host: /etc/resolv.conf:
* Profile "cilium-055000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-055000"
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-055000
>>> host: crictl pods:
* Profile "cilium-055000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-055000"
>>> host: crictl containers:
* Profile "cilium-055000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-055000"
>>> k8s: describe netcat deployment:
error: context "cilium-055000" does not exist
>>> k8s: describe netcat pod(s):
error: context "cilium-055000" does not exist
>>> k8s: netcat logs:
error: context "cilium-055000" does not exist
>>> k8s: describe coredns deployment:
error: context "cilium-055000" does not exist
>>> k8s: describe coredns pods:
error: context "cilium-055000" does not exist
>>> k8s: coredns logs:
error: context "cilium-055000" does not exist
>>> k8s: describe api server pod(s):
error: context "cilium-055000" does not exist
>>> k8s: api server logs:
error: context "cilium-055000" does not exist
>>> host: /etc/cni:
* Profile "cilium-055000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-055000"
>>> host: ip a s:
* Profile "cilium-055000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-055000"
>>> host: ip r s:
* Profile "cilium-055000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-055000"
>>> host: iptables-save:
* Profile "cilium-055000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-055000"
>>> host: iptables table nat:
* Profile "cilium-055000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-055000"
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-055000
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-055000
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-055000" does not exist
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-055000" does not exist
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-055000
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-055000
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-055000" does not exist
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-055000" does not exist
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-055000" does not exist
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-055000" does not exist
>>> k8s: kube-proxy logs:
error: context "cilium-055000" does not exist
>>> host: kubelet daemon status:
* Profile "cilium-055000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-055000"
>>> host: kubelet daemon config:
* Profile "cilium-055000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-055000"
>>> k8s: kubelet logs:
* Profile "cilium-055000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-055000"
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-055000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-055000"
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-055000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-055000"
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-055000

>>> host: docker daemon status:
* Profile "cilium-055000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-055000"

>>> host: docker daemon config:
* Profile "cilium-055000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-055000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-055000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-055000"

>>> host: docker system info:
* Profile "cilium-055000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-055000"

>>> host: cri-docker daemon status:
* Profile "cilium-055000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-055000"

>>> host: cri-docker daemon config:
* Profile "cilium-055000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-055000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-055000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-055000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-055000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-055000"

>>> host: cri-dockerd version:
* Profile "cilium-055000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-055000"

>>> host: containerd daemon status:
* Profile "cilium-055000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-055000"

>>> host: containerd daemon config:
* Profile "cilium-055000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-055000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-055000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-055000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-055000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-055000"

>>> host: containerd config dump:
* Profile "cilium-055000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-055000"

>>> host: crio daemon status:
* Profile "cilium-055000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-055000"

>>> host: crio daemon config:
* Profile "cilium-055000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-055000"

>>> host: /etc/crio:
* Profile "cilium-055000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-055000"

>>> host: crio config:
* Profile "cilium-055000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-055000"

----------------------- debugLogs end: cilium-055000 [took: 2.136885625s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-055000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-055000
--- SKIP: TestNetworkPlugins/group/cilium (2.24s)

TestStartStop/group/disable-driver-mounts (0.11s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-015000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-015000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.11s)
