Test Report: QEMU_macOS 19531

cca1ca437c91fbc205ce13fbbdef95295053f0ce:2024-08-29:35997

Failed tests (98/274)

Order  Failed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 28.33
7 TestDownloadOnly/v1.20.0/kubectl 0
22 TestOffline 10.11
33 TestAddons/parallel/Registry 72.3
46 TestCertOptions 10.12
47 TestCertExpiration 195.63
48 TestDockerFlags 10.18
49 TestForceSystemdFlag 10.17
50 TestForceSystemdEnv 10.27
95 TestFunctional/parallel/ServiceCmdConnect 28.68
167 TestMultiControlPlane/serial/StopSecondaryNode 214.12
168 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 104.79
169 TestMultiControlPlane/serial/RestartSecondaryNode 208.97
171 TestMultiControlPlane/serial/RestartClusterKeepsNodes 283.48
172 TestMultiControlPlane/serial/DeleteSecondaryNode 0.1
173 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 1.07
174 TestMultiControlPlane/serial/StopCluster 251.16
175 TestMultiControlPlane/serial/RestartCluster 5.24
176 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.08
177 TestMultiControlPlane/serial/AddSecondaryNode 0.07
181 TestImageBuild/serial/Setup 9.94
184 TestJSONOutput/start/Command 9.81
190 TestJSONOutput/pause/Command 0.08
196 TestJSONOutput/unpause/Command 0.04
213 TestMinikubeProfile 10.22
216 TestMountStart/serial/StartWithMountFirst 10.03
219 TestMultiNode/serial/FreshStart2Nodes 10.18
220 TestMultiNode/serial/DeployApp2Nodes 81.54
221 TestMultiNode/serial/PingHostFrom2Pods 0.09
222 TestMultiNode/serial/AddNode 0.07
223 TestMultiNode/serial/MultiNodeLabels 0.06
224 TestMultiNode/serial/ProfileList 0.07
225 TestMultiNode/serial/CopyFile 0.06
226 TestMultiNode/serial/StopNode 0.14
227 TestMultiNode/serial/StartAfterStop 40.39
228 TestMultiNode/serial/RestartKeepsNodes 7.45
229 TestMultiNode/serial/DeleteNode 0.1
230 TestMultiNode/serial/StopMultiNode 2.17
231 TestMultiNode/serial/RestartMultiNode 5.26
232 TestMultiNode/serial/ValidateNameConflict 19.98
236 TestPreload 9.91
238 TestScheduledStopUnix 10.09
239 TestSkaffold 12.73
242 TestRunningBinaryUpgrade 655.73
244 TestKubernetesUpgrade 19.04
258 TestStoppedBinaryUpgrade/Upgrade 610.03
268 TestPause/serial/Start 10.12
271 TestNoKubernetes/serial/StartWithK8s 9.98
272 TestNoKubernetes/serial/StartWithStopK8s 5.33
273 TestNoKubernetes/serial/Start 6.07
276 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 1.87
278 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 1.48
279 TestNoKubernetes/serial/StartNoArgs 5.31
281 TestNetworkPlugins/group/auto/Start 9.86
282 TestNetworkPlugins/group/calico/Start 10.04
283 TestNetworkPlugins/group/custom-flannel/Start 9.92
284 TestNetworkPlugins/group/false/Start 10.01
285 TestNetworkPlugins/group/kindnet/Start 9.83
286 TestNetworkPlugins/group/flannel/Start 9.81
287 TestNetworkPlugins/group/enable-default-cni/Start 9.93
288 TestNetworkPlugins/group/bridge/Start 9.89
289 TestNetworkPlugins/group/kubenet/Start 9.9
291 TestStartStop/group/old-k8s-version/serial/FirstStart 10.01
292 TestStartStop/group/old-k8s-version/serial/DeployApp 0.09
293 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.11
296 TestStartStop/group/old-k8s-version/serial/SecondStart 5.26
297 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.03
298 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.06
299 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.07
300 TestStartStop/group/old-k8s-version/serial/Pause 0.1
302 TestStartStop/group/no-preload/serial/FirstStart 10.03
303 TestStartStop/group/no-preload/serial/DeployApp 0.09
304 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.12
307 TestStartStop/group/no-preload/serial/SecondStart 5.25
308 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.03
309 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.06
310 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.07
311 TestStartStop/group/no-preload/serial/Pause 0.1
313 TestStartStop/group/embed-certs/serial/FirstStart 9.94
314 TestStartStop/group/embed-certs/serial/DeployApp 0.09
315 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.12
318 TestStartStop/group/embed-certs/serial/SecondStart 5.26
319 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.03
320 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.06
321 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.07
322 TestStartStop/group/embed-certs/serial/Pause 0.1
324 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 10.09
326 TestStartStop/group/newest-cni/serial/FirstStart 10.11
327 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.09
328 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.11
331 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 6.08
336 TestStartStop/group/newest-cni/serial/SecondStart 5.25
337 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.03
338 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.06
339 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.07
340 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.1
343 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.07
344 TestStartStop/group/newest-cni/serial/Pause 0.1
TestDownloadOnly/v1.20.0/json-events (28.33s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-031000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-031000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 : exit status 40 (28.32690575s)

-- stdout --
	{"specversion":"1.0","id":"2862a020-5b40-4e07-b5ef-af3e3df7c584","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-031000] minikube v1.33.1 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"895d34fb-60b5-42ed-ad68-7992f8fd5058","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19531"}}
	{"specversion":"1.0","id":"4de42d57-bfca-4600-b750-0c5ccf128f4a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19531-965/kubeconfig"}}
	{"specversion":"1.0","id":"46b7a229-42c2-4af2-aca4-2c92ad4340c8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"0e403467-219a-4571-9cc9-d3cfc93591b0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"4c915b8e-48b2-4dcd-ba04-7d0a2eefe11b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19531-965/.minikube"}}
	{"specversion":"1.0","id":"f3dca106-8278-4322-a487-9db1d37a39e1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"6982965e-78d3-4b74-a1c6-98c627a81781","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"347bc719-33cf-4ba1-82a9-59c46b6acd35","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"18d597ff-9949-4d3e-9cda-50e372c4de27","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"7dad265f-69cf-4230-af10-dc3caeaed168","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"download-only-031000\" primary control-plane node in \"download-only-031000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"fd4f7a58-82cc-44ee-9ea1-38fe3a7272ab","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.20.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"23d77ab8-c7db-4aba-bb26-ee91d04b414e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19531-965/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x10940f920 0x10940f920 0x10940f920 0x10940f920 0x10940f920 0x10940f920 0x10940f920] Decompressors:map[bz2:0x1400000f940 gz:0x1400000f948 tar:0x1400000f8f0 tar.bz2:0x1400000f900 tar.gz:0x1400000f910 tar.xz:0x1400000f920 tar.zst:0x1400000f930 tbz2:0x1400000f900 tgz:0x140
0000f910 txz:0x1400000f920 tzst:0x1400000f930 xz:0x1400000f950 zip:0x1400000f960 zst:0x1400000f958] Getters:map[file:0x14001818550 http:0x14000578280 https:0x14000578500] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"0f12238c-b98f-498d-b15d-cd9065cfc91c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
** stderr ** 
	I0829 11:04:35.185082    1420 out.go:345] Setting OutFile to fd 1 ...
	I0829 11:04:35.185238    1420 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 11:04:35.185241    1420 out.go:358] Setting ErrFile to fd 2...
	I0829 11:04:35.185244    1420 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 11:04:35.185368    1420 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19531-965/.minikube/bin
	W0829 11:04:35.185476    1420 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19531-965/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19531-965/.minikube/config/config.json: no such file or directory
	I0829 11:04:35.186676    1420 out.go:352] Setting JSON to true
	I0829 11:04:35.203684    1420 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":239,"bootTime":1724954436,"procs":459,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0829 11:04:35.203756    1420 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0829 11:04:35.208758    1420 out.go:97] [download-only-031000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0829 11:04:35.208912    1420 notify.go:220] Checking for updates...
	W0829 11:04:35.208932    1420 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19531-965/.minikube/cache/preloaded-tarball: no such file or directory
	I0829 11:04:35.212630    1420 out.go:169] MINIKUBE_LOCATION=19531
	I0829 11:04:35.215609    1420 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19531-965/kubeconfig
	I0829 11:04:35.220676    1420 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0829 11:04:35.223701    1420 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0829 11:04:35.226585    1420 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19531-965/.minikube
	W0829 11:04:35.232621    1420 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0829 11:04:35.232825    1420 driver.go:392] Setting default libvirt URI to qemu:///system
	I0829 11:04:35.237600    1420 out.go:97] Using the qemu2 driver based on user configuration
	I0829 11:04:35.237620    1420 start.go:297] selected driver: qemu2
	I0829 11:04:35.237636    1420 start.go:901] validating driver "qemu2" against <nil>
	I0829 11:04:35.237717    1420 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0829 11:04:35.241600    1420 out.go:169] Automatically selected the socket_vmnet network
	I0829 11:04:35.247379    1420 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0829 11:04:35.247473    1420 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0829 11:04:35.247562    1420 cni.go:84] Creating CNI manager for ""
	I0829 11:04:35.247579    1420 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0829 11:04:35.247628    1420 start.go:340] cluster config:
	{Name:download-only-031000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-031000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 11:04:35.252996    1420 iso.go:125] acquiring lock: {Name:mke1867919ff797f263eb38fb348ce00bba9d753 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 11:04:35.255669    1420 out.go:97] Downloading VM boot image ...
	I0829 11:04:35.255689    1420 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19531-965/.minikube/cache/iso/arm64/minikube-v1.33.1-1724775098-19521-arm64.iso
	I0829 11:04:46.944240    1420 out.go:97] Starting "download-only-031000" primary control-plane node in "download-only-031000" cluster
	I0829 11:04:46.944265    1420 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0829 11:04:46.998519    1420 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0829 11:04:46.998555    1420 cache.go:56] Caching tarball of preloaded images
	I0829 11:04:46.998707    1420 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0829 11:04:47.003793    1420 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0829 11:04:47.003800    1420 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0829 11:04:47.115685    1420 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19531-965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0829 11:05:02.241365    1420 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0829 11:05:02.241532    1420 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19531-965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0829 11:05:02.938473    1420 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0829 11:05:02.938656    1420 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/download-only-031000/config.json ...
	I0829 11:05:02.938673    1420 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/download-only-031000/config.json: {Name:mkc169edd70a2dc1a2bec2403108ab7bb4d18df0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 11:05:02.938889    1420 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0829 11:05:02.939071    1420 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19531-965/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0829 11:05:03.436176    1420 out.go:193] 
	W0829 11:05:03.442196    1420 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19531-965/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x10940f920 0x10940f920 0x10940f920 0x10940f920 0x10940f920 0x10940f920 0x10940f920] Decompressors:map[bz2:0x1400000f940 gz:0x1400000f948 tar:0x1400000f8f0 tar.bz2:0x1400000f900 tar.gz:0x1400000f910 tar.xz:0x1400000f920 tar.zst:0x1400000f930 tbz2:0x1400000f900 tgz:0x1400000f910 txz:0x1400000f920 tzst:0x1400000f930 xz:0x1400000f950 zip:0x1400000f960 zst:0x1400000f958] Getters:map[file:0x14001818550 http:0x14000578280 https:0x14000578500] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0829 11:05:03.442228    1420 out_reason.go:110] 
	W0829 11:05:03.451189    1420 out.go:283] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0829 11:05:03.455130    1420 out.go:193] 

** /stderr **
aaa_download_only_test.go:83: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-031000" "--force" "--alsologtostderr" "--kubernetes-version=v1.20.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.20.0/json-events (28.33s)

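The getter error above is the root cause: the checksum fetch for kubectl returns HTTP 404, because upstream does not publish darwin/arm64 kubectl binaries for v1.20.0 (Apple-silicon builds appear to have first shipped with v1.21). A minimal Go sketch, reusing only the URL from the log, to confirm the artifact is missing upstream; the probe is illustrative and not part of the test suite:

// probe_kubectl.go: HEAD-request the checksum file minikube tried to fetch.
// A 404 here reproduces the "bad response code: 404" seen in the log above.
package main

import (
	"fmt"
	"net/http"
)

func main() {
	// URL copied from the failure log (the checksum side of the download).
	url := "https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256"
	resp, err := http.Head(url)
	if err != nil {
		fmt.Println("request error:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println(url, "->", resp.Status)
}
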
TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:175: expected the file for binary exist at "/Users/jenkins/minikube-integration/19531-965/.minikube/cache/darwin/arm64/v1.20.0/kubectl" but got error stat /Users/jenkins/minikube-integration/19531-965/.minikube/cache/darwin/arm64/v1.20.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.20.0/kubectl (0.00s)

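This failure is purely downstream of the previous one: the download never completed, so the cached binary is absent. The test's assertion is essentially a stat on the cache path; a sketch of the same check, with the path copied from the failure message:

// check_cache.go: mirrors the test's existence check for the cached kubectl.
package main

import (
	"fmt"
	"os"
)

func main() {
	// Path copied from the failure message above.
	path := "/Users/jenkins/minikube-integration/19531-965/.minikube/cache/darwin/arm64/v1.20.0/kubectl"
	if _, err := os.Stat(path); err != nil {
		// In this run: "no such file or directory".
		fmt.Println("cache check failed:", err)
		return
	}
	fmt.Println("kubectl is cached at", path)
}
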
TestOffline (10.11s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-200000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-200000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (9.963560542s)

-- stdout --
	* [offline-docker-200000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19531
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19531-965/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19531-965/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "offline-docker-200000" primary control-plane node in "offline-docker-200000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-200000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0829 11:52:21.960781    3879 out.go:345] Setting OutFile to fd 1 ...
	I0829 11:52:21.960915    3879 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 11:52:21.960918    3879 out.go:358] Setting ErrFile to fd 2...
	I0829 11:52:21.960920    3879 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 11:52:21.961059    3879 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19531-965/.minikube/bin
	I0829 11:52:21.962247    3879 out.go:352] Setting JSON to false
	I0829 11:52:21.979948    3879 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3105,"bootTime":1724954436,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0829 11:52:21.980028    3879 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0829 11:52:21.985264    3879 out.go:177] * [offline-docker-200000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0829 11:52:21.993207    3879 out.go:177]   - MINIKUBE_LOCATION=19531
	I0829 11:52:21.993264    3879 notify.go:220] Checking for updates...
	I0829 11:52:21.998589    3879 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19531-965/kubeconfig
	I0829 11:52:22.001101    3879 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0829 11:52:22.004094    3879 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0829 11:52:22.007148    3879 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19531-965/.minikube
	I0829 11:52:22.010100    3879 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0829 11:52:22.013483    3879 config.go:182] Loaded profile config "multinode-531000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0829 11:52:22.013543    3879 driver.go:392] Setting default libvirt URI to qemu:///system
	I0829 11:52:22.017056    3879 out.go:177] * Using the qemu2 driver based on user configuration
	I0829 11:52:22.024180    3879 start.go:297] selected driver: qemu2
	I0829 11:52:22.024191    3879 start.go:901] validating driver "qemu2" against <nil>
	I0829 11:52:22.024200    3879 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0829 11:52:22.026304    3879 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0829 11:52:22.029073    3879 out.go:177] * Automatically selected the socket_vmnet network
	I0829 11:52:22.032130    3879 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0829 11:52:22.032154    3879 cni.go:84] Creating CNI manager for ""
	I0829 11:52:22.032162    3879 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0829 11:52:22.032165    3879 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0829 11:52:22.032197    3879 start.go:340] cluster config:
	{Name:offline-docker-200000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:offline-docker-200000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 11:52:22.035767    3879 iso.go:125] acquiring lock: {Name:mke1867919ff797f263eb38fb348ce00bba9d753 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 11:52:22.042940    3879 out.go:177] * Starting "offline-docker-200000" primary control-plane node in "offline-docker-200000" cluster
	I0829 11:52:22.047001    3879 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0829 11:52:22.047027    3879 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19531-965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0829 11:52:22.047039    3879 cache.go:56] Caching tarball of preloaded images
	I0829 11:52:22.047105    3879 preload.go:172] Found /Users/jenkins/minikube-integration/19531-965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0829 11:52:22.047111    3879 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0829 11:52:22.047172    3879 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/offline-docker-200000/config.json ...
	I0829 11:52:22.047186    3879 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/offline-docker-200000/config.json: {Name:mk6afef9a197e81664c0338ec1b85fbfd06aaddb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 11:52:22.047477    3879 start.go:360] acquireMachinesLock for offline-docker-200000: {Name:mk54c878341df5948b162b01843176e59d4f0973 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0829 11:52:22.047510    3879 start.go:364] duration metric: took 25.333µs to acquireMachinesLock for "offline-docker-200000"
	I0829 11:52:22.047520    3879 start.go:93] Provisioning new machine with config: &{Name:offline-docker-200000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:offline-docker-200000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0829 11:52:22.047557    3879 start.go:125] createHost starting for "" (driver="qemu2")
	I0829 11:52:22.050981    3879 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0829 11:52:22.067466    3879 start.go:159] libmachine.API.Create for "offline-docker-200000" (driver="qemu2")
	I0829 11:52:22.067508    3879 client.go:168] LocalClient.Create starting
	I0829 11:52:22.067596    3879 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19531-965/.minikube/certs/ca.pem
	I0829 11:52:22.067634    3879 main.go:141] libmachine: Decoding PEM data...
	I0829 11:52:22.067645    3879 main.go:141] libmachine: Parsing certificate...
	I0829 11:52:22.067690    3879 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19531-965/.minikube/certs/cert.pem
	I0829 11:52:22.067714    3879 main.go:141] libmachine: Decoding PEM data...
	I0829 11:52:22.067721    3879 main.go:141] libmachine: Parsing certificate...
	I0829 11:52:22.068106    3879 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19531-965/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19531-965/.minikube/cache/iso/arm64/minikube-v1.33.1-1724775098-19521-arm64.iso...
	I0829 11:52:22.226413    3879 main.go:141] libmachine: Creating SSH key...
	I0829 11:52:22.399912    3879 main.go:141] libmachine: Creating Disk image...
	I0829 11:52:22.399921    3879 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0829 11:52:22.400272    3879 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19531-965/.minikube/machines/offline-docker-200000/disk.qcow2.raw /Users/jenkins/minikube-integration/19531-965/.minikube/machines/offline-docker-200000/disk.qcow2
	I0829 11:52:22.410763    3879 main.go:141] libmachine: STDOUT: 
	I0829 11:52:22.410788    3879 main.go:141] libmachine: STDERR: 
	I0829 11:52:22.410837    3879 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19531-965/.minikube/machines/offline-docker-200000/disk.qcow2 +20000M
	I0829 11:52:22.423558    3879 main.go:141] libmachine: STDOUT: Image resized.
	
	I0829 11:52:22.423591    3879 main.go:141] libmachine: STDERR: 
	I0829 11:52:22.423609    3879 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19531-965/.minikube/machines/offline-docker-200000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19531-965/.minikube/machines/offline-docker-200000/disk.qcow2
	I0829 11:52:22.423615    3879 main.go:141] libmachine: Starting QEMU VM...
	I0829 11:52:22.423624    3879 qemu.go:418] Using hvf for hardware acceleration
	I0829 11:52:22.423650    3879 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19531-965/.minikube/machines/offline-docker-200000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19531-965/.minikube/machines/offline-docker-200000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19531-965/.minikube/machines/offline-docker-200000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:57:0e:7b:5a:8d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19531-965/.minikube/machines/offline-docker-200000/disk.qcow2
	I0829 11:52:22.425307    3879 main.go:141] libmachine: STDOUT: 
	I0829 11:52:22.425325    3879 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0829 11:52:22.425344    3879 client.go:171] duration metric: took 357.835792ms to LocalClient.Create
	I0829 11:52:24.427417    3879 start.go:128] duration metric: took 2.3798845s to createHost
	I0829 11:52:24.427442    3879 start.go:83] releasing machines lock for "offline-docker-200000", held for 2.379960167s
	W0829 11:52:24.427479    3879 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0829 11:52:24.440867    3879 out.go:177] * Deleting "offline-docker-200000" in qemu2 ...
	W0829 11:52:24.458791    3879 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0829 11:52:24.458803    3879 start.go:729] Will try again in 5 seconds ...
	I0829 11:52:29.461013    3879 start.go:360] acquireMachinesLock for offline-docker-200000: {Name:mk54c878341df5948b162b01843176e59d4f0973 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0829 11:52:29.461559    3879 start.go:364] duration metric: took 407µs to acquireMachinesLock for "offline-docker-200000"
	I0829 11:52:29.461698    3879 start.go:93] Provisioning new machine with config: &{Name:offline-docker-200000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:offline-docker-200000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0829 11:52:29.461981    3879 start.go:125] createHost starting for "" (driver="qemu2")
	I0829 11:52:29.468530    3879 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0829 11:52:29.521099    3879 start.go:159] libmachine.API.Create for "offline-docker-200000" (driver="qemu2")
	I0829 11:52:29.521152    3879 client.go:168] LocalClient.Create starting
	I0829 11:52:29.521261    3879 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19531-965/.minikube/certs/ca.pem
	I0829 11:52:29.521319    3879 main.go:141] libmachine: Decoding PEM data...
	I0829 11:52:29.521335    3879 main.go:141] libmachine: Parsing certificate...
	I0829 11:52:29.521396    3879 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19531-965/.minikube/certs/cert.pem
	I0829 11:52:29.521441    3879 main.go:141] libmachine: Decoding PEM data...
	I0829 11:52:29.521455    3879 main.go:141] libmachine: Parsing certificate...
	I0829 11:52:29.521958    3879 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19531-965/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19531-965/.minikube/cache/iso/arm64/minikube-v1.33.1-1724775098-19521-arm64.iso...
	I0829 11:52:29.691242    3879 main.go:141] libmachine: Creating SSH key...
	I0829 11:52:29.821194    3879 main.go:141] libmachine: Creating Disk image...
	I0829 11:52:29.821203    3879 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0829 11:52:29.821370    3879 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19531-965/.minikube/machines/offline-docker-200000/disk.qcow2.raw /Users/jenkins/minikube-integration/19531-965/.minikube/machines/offline-docker-200000/disk.qcow2
	I0829 11:52:29.830604    3879 main.go:141] libmachine: STDOUT: 
	I0829 11:52:29.830632    3879 main.go:141] libmachine: STDERR: 
	I0829 11:52:29.830698    3879 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19531-965/.minikube/machines/offline-docker-200000/disk.qcow2 +20000M
	I0829 11:52:29.838668    3879 main.go:141] libmachine: STDOUT: Image resized.
	
	I0829 11:52:29.838684    3879 main.go:141] libmachine: STDERR: 
	I0829 11:52:29.838699    3879 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19531-965/.minikube/machines/offline-docker-200000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19531-965/.minikube/machines/offline-docker-200000/disk.qcow2
	I0829 11:52:29.838705    3879 main.go:141] libmachine: Starting QEMU VM...
	I0829 11:52:29.838721    3879 qemu.go:418] Using hvf for hardware acceleration
	I0829 11:52:29.838754    3879 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19531-965/.minikube/machines/offline-docker-200000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19531-965/.minikube/machines/offline-docker-200000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19531-965/.minikube/machines/offline-docker-200000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:7c:37:a5:28:7a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19531-965/.minikube/machines/offline-docker-200000/disk.qcow2
	I0829 11:52:29.840268    3879 main.go:141] libmachine: STDOUT: 
	I0829 11:52:29.840282    3879 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0829 11:52:29.840292    3879 client.go:171] duration metric: took 319.1375ms to LocalClient.Create
	I0829 11:52:31.842433    3879 start.go:128] duration metric: took 2.380448791s to createHost
	I0829 11:52:31.842487    3879 start.go:83] releasing machines lock for "offline-docker-200000", held for 2.38093625s
	W0829 11:52:31.842861    3879 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-200000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p offline-docker-200000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0829 11:52:31.863526    3879 out.go:201] 
	W0829 11:52:31.873651    3879 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0829 11:52:31.873683    3879 out.go:270] * 
	* 
	W0829 11:52:31.875988    3879 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0829 11:52:31.886468    3879 out.go:201] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-200000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:626: *** TestOffline FAILED at 2024-08-29 11:52:31.895766 -0700 PDT m=+2876.842878001
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-200000 -n offline-docker-200000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-200000 -n offline-docker-200000: exit status 7 (49.935708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-200000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-200000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-200000
--- FAIL: TestOffline (10.11s)

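Almost every start-type failure in this run shares this signature: minikube launches QEMU through socket_vmnet_client, which cannot reach the socket_vmnet daemon at /var/run/socket_vmnet, so VM creation fails twice and the run exits with GUEST_PROVISION. A small Go sketch that probes the socket directly; the path comes from the log, and the sketch assumes the daemon is expected to listen there:

// probe_vmnet.go: dial the unix socket that socket_vmnet_client connects to.
// "connection refused" (or a missing-socket error) matches the failure above
// and points at the daemon on the CI host, not at minikube itself.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	const sock = "/var/run/socket_vmnet" // path from the failure log
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		fmt.Println("probe failed:", err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections at", sock)
}
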
TestAddons/parallel/Registry (72.3s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 1.097125ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6fb4cdfc84-2xddm" [c5b53102-3848-4204-9379-99d61d77a524] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.009417833s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-vr87j" [3059ef24-0a76-4ac9-bc80-747fc239f276] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.009584042s
addons_test.go:342: (dbg) Run:  kubectl --context addons-048000 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-048000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Non-zero exit: kubectl --context addons-048000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.058436541s)

-- stdout --
	pod "registry-test" deleted

-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:349: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-048000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:353: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
addons_test.go:361: (dbg) Run:  out/minikube-darwin-arm64 -p addons-048000 ip
2024/08/29 11:18:23 [DEBUG] GET http://192.168.105.2:5000
addons_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 -p addons-048000 addons disable registry --alsologtostderr -v=1
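The failing step here is the in-cluster reachability check: a busybox pod runs wget --spider against the registry Service's cluster DNS name and times out after a minute, even though both registry pods report Running, which suggests Service/DNS resolution inside the cluster rather than the registry process itself. A hedged Go equivalent of that probe, only meaningful when run from a pod inside the cluster network (URL taken from the test command above):

// probe_registry.go: the same reachability check the test performs with
// busybox + wget, written as a plain HTTP GET. Run it from inside the
// cluster, since it resolves the Service's cluster-local DNS name.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 10 * time.Second}
	resp, err := client.Get("http://registry.kube-system.svc.cluster.local")
	if err != nil {
		fmt.Println("registry unreachable:", err) // this run timed out here
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status) // the test expects HTTP/1.1 200
}
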
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p addons-048000 -n addons-048000
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p addons-048000 logs -n 25
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-031000 | jenkins | v1.33.1 | 29 Aug 24 11:04 PDT |                     |
	|         | -p download-only-031000              |                      |         |         |                     |                     |
	|         | --force --alsologtostderr            |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |                      |         |         |                     |                     |
	|         | --container-runtime=docker           |                      |         |         |                     |                     |
	|         | --driver=qemu2                       |                      |         |         |                     |                     |
	| delete  | --all                                | minikube             | jenkins | v1.33.1 | 29 Aug 24 11:05 PDT | 29 Aug 24 11:05 PDT |
	| delete  | -p download-only-031000              | download-only-031000 | jenkins | v1.33.1 | 29 Aug 24 11:05 PDT | 29 Aug 24 11:05 PDT |
	| start   | -o=json --download-only              | download-only-318000 | jenkins | v1.33.1 | 29 Aug 24 11:05 PDT |                     |
	|         | -p download-only-318000              |                      |         |         |                     |                     |
	|         | --force --alsologtostderr            |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0         |                      |         |         |                     |                     |
	|         | --container-runtime=docker           |                      |         |         |                     |                     |
	|         | --driver=qemu2                       |                      |         |         |                     |                     |
	| delete  | --all                                | minikube             | jenkins | v1.33.1 | 29 Aug 24 11:05 PDT | 29 Aug 24 11:05 PDT |
	| delete  | -p download-only-318000              | download-only-318000 | jenkins | v1.33.1 | 29 Aug 24 11:05 PDT | 29 Aug 24 11:05 PDT |
	| delete  | -p download-only-031000              | download-only-031000 | jenkins | v1.33.1 | 29 Aug 24 11:05 PDT | 29 Aug 24 11:05 PDT |
	| delete  | -p download-only-318000              | download-only-318000 | jenkins | v1.33.1 | 29 Aug 24 11:05 PDT | 29 Aug 24 11:05 PDT |
	| start   | --download-only -p                   | binary-mirror-471000 | jenkins | v1.33.1 | 29 Aug 24 11:05 PDT |                     |
	|         | binary-mirror-471000                 |                      |         |         |                     |                     |
	|         | --alsologtostderr                    |                      |         |         |                     |                     |
	|         | --binary-mirror                      |                      |         |         |                     |                     |
	|         | http://127.0.0.1:49311               |                      |         |         |                     |                     |
	|         | --driver=qemu2                       |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-471000              | binary-mirror-471000 | jenkins | v1.33.1 | 29 Aug 24 11:05 PDT | 29 Aug 24 11:05 PDT |
	| addons  | enable dashboard -p                  | addons-048000        | jenkins | v1.33.1 | 29 Aug 24 11:05 PDT |                     |
	|         | addons-048000                        |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                 | addons-048000        | jenkins | v1.33.1 | 29 Aug 24 11:05 PDT |                     |
	|         | addons-048000                        |                      |         |         |                     |                     |
	| start   | -p addons-048000 --wait=true         | addons-048000        | jenkins | v1.33.1 | 29 Aug 24 11:05 PDT | 29 Aug 24 11:08 PDT |
	|         | --memory=4000 --alsologtostderr      |                      |         |         |                     |                     |
	|         | --addons=registry                    |                      |         |         |                     |                     |
	|         | --addons=metrics-server              |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                      |         |         |                     |                     |
	|         | --driver=qemu2  --addons=ingress     |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                      |         |         |                     |                     |
	| addons  | addons-048000 addons disable         | addons-048000        | jenkins | v1.33.1 | 29 Aug 24 11:09 PDT | 29 Aug 24 11:09 PDT |
	|         | volcano --alsologtostderr -v=1       |                      |         |         |                     |                     |
	| addons  | addons-048000 addons                 | addons-048000        | jenkins | v1.33.1 | 29 Aug 24 11:17 PDT | 29 Aug 24 11:18 PDT |
	|         | disable csi-hostpath-driver          |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| addons  | addons-048000 addons                 | addons-048000        | jenkins | v1.33.1 | 29 Aug 24 11:18 PDT | 29 Aug 24 11:18 PDT |
	|         | disable volumesnapshots              |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| addons  | addons-048000 addons                 | addons-048000        | jenkins | v1.33.1 | 29 Aug 24 11:18 PDT | 29 Aug 24 11:18 PDT |
	|         | disable metrics-server               |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p          | addons-048000        | jenkins | v1.33.1 | 29 Aug 24 11:18 PDT | 29 Aug 24 11:18 PDT |
	|         | addons-048000                        |                      |         |         |                     |                     |
	| ip      | addons-048000 ip                     | addons-048000        | jenkins | v1.33.1 | 29 Aug 24 11:18 PDT | 29 Aug 24 11:18 PDT |
	| addons  | addons-048000 addons disable         | addons-048000        | jenkins | v1.33.1 | 29 Aug 24 11:18 PDT | 29 Aug 24 11:18 PDT |
	|         | registry --alsologtostderr           |                      |         |         |                     |                     |
	|         | -v=1                                 |                      |         |         |                     |                     |
	|---------|--------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/29 11:05:12
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0829 11:05:12.749097    1498 out.go:345] Setting OutFile to fd 1 ...
	I0829 11:05:12.749218    1498 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 11:05:12.749222    1498 out.go:358] Setting ErrFile to fd 2...
	I0829 11:05:12.749224    1498 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 11:05:12.749350    1498 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19531-965/.minikube/bin
	I0829 11:05:12.750373    1498 out.go:352] Setting JSON to false
	I0829 11:05:12.766233    1498 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":276,"bootTime":1724954436,"procs":448,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0829 11:05:12.766294    1498 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0829 11:05:12.771392    1498 out.go:177] * [addons-048000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0829 11:05:12.778555    1498 out.go:177]   - MINIKUBE_LOCATION=19531
	I0829 11:05:12.778608    1498 notify.go:220] Checking for updates...
	I0829 11:05:12.785472    1498 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19531-965/kubeconfig
	I0829 11:05:12.788531    1498 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0829 11:05:12.791525    1498 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0829 11:05:12.794492    1498 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19531-965/.minikube
	I0829 11:05:12.797525    1498 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0829 11:05:12.800730    1498 driver.go:392] Setting default libvirt URI to qemu:///system
	I0829 11:05:12.804473    1498 out.go:177] * Using the qemu2 driver based on user configuration
	I0829 11:05:12.811536    1498 start.go:297] selected driver: qemu2
	I0829 11:05:12.811543    1498 start.go:901] validating driver "qemu2" against <nil>
	I0829 11:05:12.811550    1498 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0829 11:05:12.813858    1498 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0829 11:05:12.816514    1498 out.go:177] * Automatically selected the socket_vmnet network
	I0829 11:05:12.819536    1498 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0829 11:05:12.819573    1498 cni.go:84] Creating CNI manager for ""
	I0829 11:05:12.819580    1498 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0829 11:05:12.819588    1498 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0829 11:05:12.819619    1498 start.go:340] cluster config:
	{Name:addons-048000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-048000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 11:05:12.823384    1498 iso.go:125] acquiring lock: {Name:mke1867919ff797f263eb38fb348ce00bba9d753 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 11:05:12.831512    1498 out.go:177] * Starting "addons-048000" primary control-plane node in "addons-048000" cluster
	I0829 11:05:12.835534    1498 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0829 11:05:12.835548    1498 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19531-965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0829 11:05:12.835553    1498 cache.go:56] Caching tarball of preloaded images
	I0829 11:05:12.835615    1498 preload.go:172] Found /Users/jenkins/minikube-integration/19531-965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0829 11:05:12.835621    1498 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0829 11:05:12.835803    1498 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/addons-048000/config.json ...
	I0829 11:05:12.835813    1498 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/addons-048000/config.json: {Name:mk2412d0fcebec2bd4c944fde76aa7913fd9d3a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 11:05:12.836183    1498 start.go:360] acquireMachinesLock for addons-048000: {Name:mk54c878341df5948b162b01843176e59d4f0973 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0829 11:05:12.836247    1498 start.go:364] duration metric: took 58.416µs to acquireMachinesLock for "addons-048000"
	I0829 11:05:12.836256    1498 start.go:93] Provisioning new machine with config: &{Name:addons-048000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-048000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0829 11:05:12.836284    1498 start.go:125] createHost starting for "" (driver="qemu2")
	I0829 11:05:12.844515    1498 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0829 11:05:13.903401    1498 start.go:159] libmachine.API.Create for "addons-048000" (driver="qemu2")
	I0829 11:05:13.903466    1498 client.go:168] LocalClient.Create starting
	I0829 11:05:13.903793    1498 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/19531-965/.minikube/certs/ca.pem
	I0829 11:05:13.981474    1498 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/19531-965/.minikube/certs/cert.pem
	I0829 11:05:14.032600    1498 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19531-965/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19531-965/.minikube/cache/iso/arm64/minikube-v1.33.1-1724775098-19521-arm64.iso...
	I0829 11:05:14.316792    1498 main.go:141] libmachine: Creating SSH key...
	I0829 11:05:14.439098    1498 main.go:141] libmachine: Creating Disk image...
	I0829 11:05:14.439103    1498 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0829 11:05:14.439321    1498 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19531-965/.minikube/machines/addons-048000/disk.qcow2.raw /Users/jenkins/minikube-integration/19531-965/.minikube/machines/addons-048000/disk.qcow2
	I0829 11:05:14.531373    1498 main.go:141] libmachine: STDOUT: 
	I0829 11:05:14.531406    1498 main.go:141] libmachine: STDERR: 
	I0829 11:05:14.531475    1498 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19531-965/.minikube/machines/addons-048000/disk.qcow2 +20000M
	I0829 11:05:14.542180    1498 main.go:141] libmachine: STDOUT: Image resized.
	
	I0829 11:05:14.542195    1498 main.go:141] libmachine: STDERR: 
	I0829 11:05:14.542207    1498 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19531-965/.minikube/machines/addons-048000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19531-965/.minikube/machines/addons-048000/disk.qcow2
	I0829 11:05:14.542214    1498 main.go:141] libmachine: Starting QEMU VM...
	I0829 11:05:14.542247    1498 qemu.go:418] Using hvf for hardware acceleration
	I0829 11:05:14.542286    1498 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19531-965/.minikube/machines/addons-048000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19531-965/.minikube/machines/addons-048000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19531-965/.minikube/machines/addons-048000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:2a:b4:2e:bf:78 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19531-965/.minikube/machines/addons-048000/disk.qcow2
	I0829 11:05:14.598172    1498 main.go:141] libmachine: STDOUT: 
	I0829 11:05:14.598209    1498 main.go:141] libmachine: STDERR: 
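Because the VM is launched with -daemonize and -pidfile, its liveness can be confirmed from the host while minikube polls for a DHCP lease below (a manual sketch using the pidfile path from the command above):

	# check that the daemonized QEMU process recorded in the pidfile is still alive
	PIDFILE=/Users/jenkins/minikube-integration/19531-965/.minikube/machines/addons-048000/qemu.pid
	test -f "$PIDFILE" && ps -p "$(cat "$PIDFILE")"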
	I0829 11:05:14.598213    1498 main.go:141] libmachine: Attempt 0
	I0829 11:05:14.598224    1498 main.go:141] libmachine: Searching for 2a:2a:b4:2e:bf:78 in /var/db/dhcpd_leases ...
	I0829 11:05:14.598279    1498 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0829 11:05:14.598300    1498 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x66d208d1}
	I0829 11:05:16.599454    1498 main.go:141] libmachine: Attempt 1
	I0829 11:05:16.599539    1498 main.go:141] libmachine: Searching for 2a:2a:b4:2e:bf:78 in /var/db/dhcpd_leases ...
	I0829 11:05:16.599906    1498 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0829 11:05:16.599990    1498 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x66d208d1}
	I0829 11:05:18.601314    1498 main.go:141] libmachine: Attempt 2
	I0829 11:05:18.601512    1498 main.go:141] libmachine: Searching for 2a:2a:b4:2e:bf:78 in /var/db/dhcpd_leases ...
	I0829 11:05:18.601883    1498 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0829 11:05:18.601992    1498 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x66d208d1}
	I0829 11:05:20.603164    1498 main.go:141] libmachine: Attempt 3
	I0829 11:05:20.603211    1498 main.go:141] libmachine: Searching for 2a:2a:b4:2e:bf:78 in /var/db/dhcpd_leases ...
	I0829 11:05:20.603330    1498 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0829 11:05:20.603382    1498 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x66d208d1}
	I0829 11:05:22.604408    1498 main.go:141] libmachine: Attempt 4
	I0829 11:05:22.604419    1498 main.go:141] libmachine: Searching for 2a:2a:b4:2e:bf:78 in /var/db/dhcpd_leases ...
	I0829 11:05:22.604526    1498 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0829 11:05:22.604547    1498 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x66d208d1}
	I0829 11:05:24.605571    1498 main.go:141] libmachine: Attempt 5
	I0829 11:05:24.605577    1498 main.go:141] libmachine: Searching for 2a:2a:b4:2e:bf:78 in /var/db/dhcpd_leases ...
	I0829 11:05:24.605605    1498 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0829 11:05:24.605610    1498 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x66d208d1}
	I0829 11:05:26.606671    1498 main.go:141] libmachine: Attempt 6
	I0829 11:05:26.606686    1498 main.go:141] libmachine: Searching for 2a:2a:b4:2e:bf:78 in /var/db/dhcpd_leases ...
	I0829 11:05:26.606748    1498 main.go:141] libmachine: Found 1 entries in /var/db/dhcpd_leases!
	I0829 11:05:26.606757    1498 main.go:141] libmachine: dhcp entry: {Name: IPAddress:192.168.106.2 HWAddress:aa:65:72:3a:1b:9b ID:1,aa:65:72:3a:1b:9b Lease:0x66d208d1}
	I0829 11:05:28.607896    1498 main.go:141] libmachine: Attempt 7
	I0829 11:05:28.607925    1498 main.go:141] libmachine: Searching for 2a:2a:b4:2e:bf:78 in /var/db/dhcpd_leases ...
	I0829 11:05:28.608048    1498 main.go:141] libmachine: Found 2 entries in /var/db/dhcpd_leases!
	I0829 11:05:28.608066    1498 main.go:141] libmachine: dhcp entry: {Name:minikube IPAddress:192.168.105.2 HWAddress:2a:2a:b4:2e:bf:78 ID:1,2a:2a:b4:2e:bf:78 Lease:0x66d209e7}
	I0829 11:05:28.608068    1498 main.go:141] libmachine: Found match: 2a:2a:b4:2e:bf:78
	I0829 11:05:28.608079    1498 main.go:141] libmachine: IP: 192.168.105.2
	I0829 11:05:28.608083    1498 main.go:141] libmachine: Waiting for VM to start (ssh -p 22 docker@192.168.105.2)...
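The lease polling above is reproducible by hand: on macOS the vmnet DHCP server records leases in /var/db/dhcpd_leases, so matching the VM's MAC address recovers its IP (sketch using this run's MAC):

	# locate the lease entry for the VM's NIC; the IPAddress field is the VM's IP
	grep -i '2a:2a:b4:2e:bf:78' /var/db/dhcpd_leases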
	I0829 11:05:30.616410    1498 machine.go:93] provisionDockerMachine start ...
	I0829 11:05:30.617723    1498 main.go:141] libmachine: Using SSH client type: native
	I0829 11:05:30.617904    1498 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10287c5a0] 0x10287ee00 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0829 11:05:30.617914    1498 main.go:141] libmachine: About to run SSH command:
	hostname
	I0829 11:05:30.672723    1498 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0829 11:05:30.672737    1498 buildroot.go:166] provisioning hostname "addons-048000"
	I0829 11:05:30.672783    1498 main.go:141] libmachine: Using SSH client type: native
	I0829 11:05:30.672928    1498 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10287c5a0] 0x10287ee00 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0829 11:05:30.672934    1498 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-048000 && echo "addons-048000" | sudo tee /etc/hostname
	I0829 11:05:30.731593    1498 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-048000
	
	I0829 11:05:30.731647    1498 main.go:141] libmachine: Using SSH client type: native
	I0829 11:05:30.731762    1498 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10287c5a0] 0x10287ee00 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0829 11:05:30.731772    1498 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-048000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-048000/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-048000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0829 11:05:30.784132    1498 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0829 11:05:30.784164    1498 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19531-965/.minikube CaCertPath:/Users/jenkins/minikube-integration/19531-965/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19531-965/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19531-965/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19531-965/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19531-965/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19531-965/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19531-965/.minikube}
	I0829 11:05:30.784183    1498 buildroot.go:174] setting up certificates
	I0829 11:05:30.784187    1498 provision.go:84] configureAuth start
	I0829 11:05:30.784191    1498 provision.go:143] copyHostCerts
	I0829 11:05:30.784286    1498 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19531-965/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19531-965/.minikube/ca.pem (1082 bytes)
	I0829 11:05:30.784519    1498 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19531-965/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19531-965/.minikube/cert.pem (1123 bytes)
	I0829 11:05:30.784625    1498 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19531-965/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19531-965/.minikube/key.pem (1675 bytes)
	I0829 11:05:30.784701    1498 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19531-965/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19531-965/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19531-965/.minikube/certs/ca-key.pem org=jenkins.addons-048000 san=[127.0.0.1 192.168.105.2 addons-048000 localhost minikube]
	I0829 11:05:30.846415    1498 provision.go:177] copyRemoteCerts
	I0829 11:05:30.846469    1498 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0829 11:05:30.846486    1498 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19531-965/.minikube/machines/addons-048000/id_rsa Username:docker}
	I0829 11:05:30.876185    1498 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19531-965/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0829 11:05:30.884498    1498 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19531-965/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0829 11:05:30.892829    1498 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19531-965/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0829 11:05:30.901015    1498 provision.go:87] duration metric: took 116.823917ms to configureAuth
	I0829 11:05:30.901023    1498 buildroot.go:189] setting minikube options for container-runtime
	I0829 11:05:30.901115    1498 config.go:182] Loaded profile config "addons-048000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0829 11:05:30.901148    1498 main.go:141] libmachine: Using SSH client type: native
	I0829 11:05:30.901230    1498 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10287c5a0] 0x10287ee00 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0829 11:05:30.901235    1498 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0829 11:05:30.951642    1498 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0829 11:05:30.951649    1498 buildroot.go:70] root file system type: tmpfs
	I0829 11:05:30.951694    1498 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0829 11:05:30.951746    1498 main.go:141] libmachine: Using SSH client type: native
	I0829 11:05:30.951854    1498 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10287c5a0] 0x10287ee00 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0829 11:05:30.951887    1498 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0829 11:05:31.005715    1498 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0829 11:05:31.005771    1498 main.go:141] libmachine: Using SSH client type: native
	I0829 11:05:31.005889    1498 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10287c5a0] 0x10287ee00 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0829 11:05:31.005897    1498 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0829 11:05:32.368887    1498 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0829 11:05:32.368901    1498 machine.go:96] duration metric: took 1.752494458s to provisionDockerMachine
	I0829 11:05:32.368909    1498 client.go:171] duration metric: took 18.465604s to LocalClient.Create
	I0829 11:05:32.368922    1498 start.go:167] duration metric: took 18.465697041s to libmachine.API.Create "addons-048000"
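With the new unit swapped in and enabled, the content systemd actually loaded and the daemon's state can be read back from the guest (a hedged manual check, assuming the profile is still running):

	# show the effective docker unit, then confirm the daemon is active
	out/minikube-darwin-arm64 -p addons-048000 ssh -- sudo systemctl cat docker
	out/minikube-darwin-arm64 -p addons-048000 ssh -- sudo systemctl is-active docker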
	I0829 11:05:32.368929    1498 start.go:293] postStartSetup for "addons-048000" (driver="qemu2")
	I0829 11:05:32.368935    1498 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0829 11:05:32.369012    1498 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0829 11:05:32.369021    1498 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19531-965/.minikube/machines/addons-048000/id_rsa Username:docker}
	I0829 11:05:32.397477    1498 ssh_runner.go:195] Run: cat /etc/os-release
	I0829 11:05:32.399155    1498 info.go:137] Remote host: Buildroot 2023.02.9
	I0829 11:05:32.399163    1498 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19531-965/.minikube/addons for local assets ...
	I0829 11:05:32.399255    1498 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19531-965/.minikube/files for local assets ...
	I0829 11:05:32.399286    1498 start.go:296] duration metric: took 30.353167ms for postStartSetup
	I0829 11:05:32.399684    1498 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/addons-048000/config.json ...
	I0829 11:05:32.399860    1498 start.go:128] duration metric: took 19.563748792s to createHost
	I0829 11:05:32.399890    1498 main.go:141] libmachine: Using SSH client type: native
	I0829 11:05:32.399978    1498 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x10287c5a0] 0x10287ee00 <nil>  [] 0s} 192.168.105.2 22 <nil> <nil>}
	I0829 11:05:32.399982    1498 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0829 11:05:32.449285    1498 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724954732.939319253
	
	I0829 11:05:32.449291    1498 fix.go:216] guest clock: 1724954732.939319253
	I0829 11:05:32.449295    1498 fix.go:229] Guest: 2024-08-29 11:05:32.939319253 -0700 PDT Remote: 2024-08-29 11:05:32.399866 -0700 PDT m=+19.670138834 (delta=539.453253ms)
	I0829 11:05:32.449306    1498 fix.go:200] guest clock delta is within tolerance: 539.453253ms
	I0829 11:05:32.449310    1498 start.go:83] releasing machines lock for "addons-048000", held for 19.613235583s
	I0829 11:05:32.449613    1498 ssh_runner.go:195] Run: cat /version.json
	I0829 11:05:32.449626    1498 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19531-965/.minikube/machines/addons-048000/id_rsa Username:docker}
	I0829 11:05:32.449629    1498 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0829 11:05:32.449676    1498 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19531-965/.minikube/machines/addons-048000/id_rsa Username:docker}
	I0829 11:05:32.477053    1498 ssh_runner.go:195] Run: systemctl --version
	I0829 11:05:32.527633    1498 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0829 11:05:32.529745    1498 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0829 11:05:32.529776    1498 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0829 11:05:32.535981    1498 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0829 11:05:32.535990    1498 start.go:495] detecting cgroup driver to use...
	I0829 11:05:32.536097    1498 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0829 11:05:32.543234    1498 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0829 11:05:32.547424    1498 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0829 11:05:32.551480    1498 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0829 11:05:32.551507    1498 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0829 11:05:32.555551    1498 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0829 11:05:32.559616    1498 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0829 11:05:32.563693    1498 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0829 11:05:32.567805    1498 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0829 11:05:32.571936    1498 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0829 11:05:32.576097    1498 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0829 11:05:32.580030    1498 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0829 11:05:32.584277    1498 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0829 11:05:32.588118    1498 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0829 11:05:32.592060    1498 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 11:05:32.671996    1498 ssh_runner.go:195] Run: sudo systemctl restart containerd
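The sed edits above force containerd onto the cgroupfs driver and pin the pause image; whether they landed can be spot-checked inside the guest before trusting the restart (manual sketch):

	# verify the rewritten containerd settings
	out/minikube-darwin-arm64 -p addons-048000 ssh -- grep -nE 'SystemdCgroup|sandbox_image' /etc/containerd/config.toml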
	I0829 11:05:32.682809    1498 start.go:495] detecting cgroup driver to use...
	I0829 11:05:32.682881    1498 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0829 11:05:32.689049    1498 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0829 11:05:32.696050    1498 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0829 11:05:32.706786    1498 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0829 11:05:32.712110    1498 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0829 11:05:32.717728    1498 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0829 11:05:32.759796    1498 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0829 11:05:32.765779    1498 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0829 11:05:32.772162    1498 ssh_runner.go:195] Run: which cri-dockerd
	I0829 11:05:32.773501    1498 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0829 11:05:32.776895    1498 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0829 11:05:32.782695    1498 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0829 11:05:32.852750    1498 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0829 11:05:32.946777    1498 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0829 11:05:32.946841    1498 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0829 11:05:32.953072    1498 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 11:05:33.042212    1498 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0829 11:05:35.217271    1498 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.175059667s)
	I0829 11:05:35.217342    1498 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0829 11:05:35.222776    1498 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0829 11:05:35.229380    1498 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0829 11:05:35.234986    1498 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0829 11:05:35.327082    1498 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0829 11:05:35.408802    1498 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 11:05:35.497998    1498 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0829 11:05:35.504781    1498 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0829 11:05:35.510299    1498 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 11:05:35.597798    1498 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0829 11:05:35.623382    1498 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0829 11:05:35.623469    1498 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0829 11:05:35.626758    1498 start.go:563] Will wait 60s for crictl version
	I0829 11:05:35.626813    1498 ssh_runner.go:195] Run: which crictl
	I0829 11:05:35.628295    1498 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0829 11:05:35.645265    1498 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.2
	RuntimeApiVersion:  v1
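crictl picks up its endpoint from the /etc/crictl.yaml written a few steps earlier; the same query can be made with the endpoint spelled out explicitly, run inside the guest (sketch, socket path taken from the log):

	# query cri-dockerd's CRI endpoint directly, bypassing crictl.yaml
	sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version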
	I0829 11:05:35.645343    1498 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0829 11:05:35.655292    1498 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0829 11:05:35.670358    1498 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.1.2 ...
	I0829 11:05:35.670447    1498 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0829 11:05:35.671989    1498 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0829 11:05:35.676560    1498 kubeadm.go:883] updating cluster {Name:addons-048000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-048000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0829 11:05:35.676612    1498 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0829 11:05:35.676654    1498 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0829 11:05:35.681976    1498 docker.go:685] Got preloaded images: 
	I0829 11:05:35.681984    1498 docker.go:691] registry.k8s.io/kube-apiserver:v1.31.0 wasn't preloaded
	I0829 11:05:35.682025    1498 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0829 11:05:35.685480    1498 ssh_runner.go:195] Run: which lz4
	I0829 11:05:35.686898    1498 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0829 11:05:35.688288    1498 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0829 11:05:35.688300    1498 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19531-965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (322549298 bytes)
	I0829 11:05:36.936083    1498 docker.go:649] duration metric: took 1.249225375s to copy over tarball
	I0829 11:05:36.936138    1498 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0829 11:05:37.908008    1498 ssh_runner.go:146] rm: /preloaded.tar.lz4
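The preload is an lz4-compressed tarball unpacked over /var; its contents can be listed locally without extraction as a cheap sanity check before a run (sketch against the cached file named earlier; requires the lz4 CLI):

	# peek at the leading entries of the preload tarball
	lz4 -dc /Users/jenkins/minikube-integration/19531-965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 | tar -tf - | head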
	I0829 11:05:37.922776    1498 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0829 11:05:37.926705    1498 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2631 bytes)
	I0829 11:05:37.932698    1498 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 11:05:38.024422    1498 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0829 11:05:40.223342    1498 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.19892275s)
	I0829 11:05:40.223456    1498 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0829 11:05:40.229801    1498 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0829 11:05:40.229822    1498 cache_images.go:84] Images are preloaded, skipping loading
	I0829 11:05:40.229827    1498 kubeadm.go:934] updating node { 192.168.105.2 8443 v1.31.0 docker true true} ...
	I0829 11:05:40.229910    1498 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-048000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:addons-048000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
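The drop-in above follows the same pattern as the docker unit: the bare ExecStart= line clears the packaged command so the node-specific one replaces it rather than stacking. The merged unit systemd ends up with can be inspected from the guest (manual sketch):

	# show the kubelet unit together with the 10-kubeadm.conf drop-in
	out/minikube-darwin-arm64 -p addons-048000 ssh -- sudo systemctl cat kubelet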
	I0829 11:05:40.229976    1498 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0829 11:05:40.254104    1498 cni.go:84] Creating CNI manager for ""
	I0829 11:05:40.254120    1498 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0829 11:05:40.254125    1498 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0829 11:05:40.254135    1498 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.2 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-048000 NodeName:addons-048000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0829 11:05:40.254206    1498 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-048000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
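Before kubeadm consumes the rendered config, it can be checked offline; recent kubeadm releases ship a validate subcommand (assumed present in v1.31.0; file path taken from the scp step below), run inside the guest:

	# syntax/semantic check of the generated kubeadm config
	sudo /var/lib/minikube/binaries/v1.31.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new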
	
	I0829 11:05:40.254270    1498 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0829 11:05:40.257867    1498 binaries.go:44] Found k8s binaries, skipping transfer
	I0829 11:05:40.257896    1498 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0829 11:05:40.261038    1498 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0829 11:05:40.266798    1498 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0829 11:05:40.272571    1498 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0829 11:05:40.278479    1498 ssh_runner.go:195] Run: grep 192.168.105.2	control-plane.minikube.internal$ /etc/hosts
	I0829 11:05:40.279738    1498 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0829 11:05:40.284056    1498 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 11:05:40.357071    1498 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 11:05:40.366829    1498 certs.go:68] Setting up /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/addons-048000 for IP: 192.168.105.2
	I0829 11:05:40.366838    1498 certs.go:194] generating shared ca certs ...
	I0829 11:05:40.366846    1498 certs.go:226] acquiring lock for ca certs: {Name:mk29df1c1b696cda1cc19a90487167bb76984cb6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 11:05:40.367020    1498 certs.go:240] generating "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19531-965/.minikube/ca.key
	I0829 11:05:40.497631    1498 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19531-965/.minikube/ca.crt ...
	I0829 11:05:40.497641    1498 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19531-965/.minikube/ca.crt: {Name:mkc4743cd374b0147c94c96da34c7e0a51fdbdbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 11:05:40.497946    1498 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19531-965/.minikube/ca.key ...
	I0829 11:05:40.497949    1498 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19531-965/.minikube/ca.key: {Name:mk6980af8391c12e20fb2e50b124c003fd4f98d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 11:05:40.498062    1498 certs.go:240] generating "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19531-965/.minikube/proxy-client-ca.key
	I0829 11:05:40.673911    1498 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19531-965/.minikube/proxy-client-ca.crt ...
	I0829 11:05:40.673921    1498 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19531-965/.minikube/proxy-client-ca.crt: {Name:mk94291c56e75e2123816f6ea97cc346e30225e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 11:05:40.674106    1498 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19531-965/.minikube/proxy-client-ca.key ...
	I0829 11:05:40.674109    1498 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19531-965/.minikube/proxy-client-ca.key: {Name:mkb58f7a861598287ab409195614d91ff6d7e1ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 11:05:40.674239    1498 certs.go:256] generating profile certs ...
	I0829 11:05:40.674286    1498 certs.go:363] generating signed profile cert for "minikube-user": /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/addons-048000/client.key
	I0829 11:05:40.674293    1498 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/addons-048000/client.crt with IP's: []
	I0829 11:05:40.933525    1498 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/addons-048000/client.crt ...
	I0829 11:05:40.933538    1498 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/addons-048000/client.crt: {Name:mkca40d9ae0c572f0c49f5f4f365866f3bb94fc7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 11:05:40.933834    1498 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/addons-048000/client.key ...
	I0829 11:05:40.933838    1498 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/addons-048000/client.key: {Name:mk86e1d0e8970123900125779a6b13d2f3396b0d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 11:05:40.933963    1498 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/addons-048000/apiserver.key.ec3197e4
	I0829 11:05:40.933973    1498 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/addons-048000/apiserver.crt.ec3197e4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.105.2]
	I0829 11:05:40.997064    1498 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/addons-048000/apiserver.crt.ec3197e4 ...
	I0829 11:05:40.997068    1498 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/addons-048000/apiserver.crt.ec3197e4: {Name:mkbc8e5f21b1e7a61d826f0dcd8f3c2fc6018071 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 11:05:40.997211    1498 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/addons-048000/apiserver.key.ec3197e4 ...
	I0829 11:05:40.997215    1498 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/addons-048000/apiserver.key.ec3197e4: {Name:mkeaf5135d0a1e4918abc61606973de9542df389 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 11:05:40.997326    1498 certs.go:381] copying /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/addons-048000/apiserver.crt.ec3197e4 -> /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/addons-048000/apiserver.crt
	I0829 11:05:40.997533    1498 certs.go:385] copying /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/addons-048000/apiserver.key.ec3197e4 -> /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/addons-048000/apiserver.key
	I0829 11:05:40.997628    1498 certs.go:363] generating signed profile cert for "aggregator": /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/addons-048000/proxy-client.key
	I0829 11:05:40.997636    1498 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/addons-048000/proxy-client.crt with IP's: []
	I0829 11:05:41.163583    1498 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/addons-048000/proxy-client.crt ...
	I0829 11:05:41.163594    1498 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/addons-048000/proxy-client.crt: {Name:mka7385ea01241075429ed6d9781d1ccfd44aff2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 11:05:41.163824    1498 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/addons-048000/proxy-client.key ...
	I0829 11:05:41.163827    1498 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/addons-048000/proxy-client.key: {Name:mk2354903bcb22fe4caa500df0140450ea931bbe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 11:05:41.164079    1498 certs.go:484] found cert: /Users/jenkins/minikube-integration/19531-965/.minikube/certs/ca-key.pem (1675 bytes)
	I0829 11:05:41.164105    1498 certs.go:484] found cert: /Users/jenkins/minikube-integration/19531-965/.minikube/certs/ca.pem (1082 bytes)
	I0829 11:05:41.164122    1498 certs.go:484] found cert: /Users/jenkins/minikube-integration/19531-965/.minikube/certs/cert.pem (1123 bytes)
	I0829 11:05:41.164139    1498 certs.go:484] found cert: /Users/jenkins/minikube-integration/19531-965/.minikube/certs/key.pem (1675 bytes)
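The run above generates the shared minikubeCA and proxyClientCA pairs, then the per-profile client, apiserver (note the service IP, loopback, and node IP SANs at 11:05:40.933973), and aggregator proxy-client certs, before picking up the machine certs already on disk. For orientation, a self-signed CA in the same shape as minikubeCA can be produced with Go's standard library alone; a minimal sketch, not minikube's certs.go (subject name and lifetime here are illustrative choices):

	// Sketch: create a self-signed CA key pair and print the certificate PEM.
	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().AddDate(10, 0, 0),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
			BasicConstraintsValid: true,
		}
		// Self-signed: the template serves as both subject and issuer.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}
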
	I0829 11:05:41.164539    1498 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19531-965/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0829 11:05:41.173652    1498 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19531-965/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0829 11:05:41.182079    1498 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19531-965/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0829 11:05:41.190185    1498 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19531-965/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0829 11:05:41.198458    1498 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/addons-048000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0829 11:05:41.206979    1498 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/addons-048000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0829 11:05:41.215479    1498 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/addons-048000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0829 11:05:41.223969    1498 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/addons-048000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0829 11:05:41.232379    1498 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19531-965/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0829 11:05:41.240746    1498 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0829 11:05:41.247849    1498 ssh_runner.go:195] Run: openssl version
	I0829 11:05:41.250327    1498 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0829 11:05:41.254080    1498 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0829 11:05:41.255721    1498 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 29 18:05 /usr/share/ca-certificates/minikubeCA.pem
	I0829 11:05:41.255741    1498 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0829 11:05:41.257875    1498 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
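OpenSSL-style trust stores look up CAs by subject-hash symlinks, which is why the CA is linked twice above: once by name under /usr/share/ca-certificates and once as /etc/ssl/certs/b5213941.0, where b5213941 is the output of openssl x509 -hash. A standalone sketch of the same wiring, shelling out to openssl exactly as the remote commands do (illustrative, not minikube's code):

	package main

	import (
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	func main() {
		const caPath = "/usr/share/ca-certificates/minikubeCA.pem"
		// openssl prints the subject hash used to name the symlink.
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", caPath).Output()
		if err != nil {
			panic(err)
		}
		link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
		// Recreate the link idempotently, mirroring `ln -fs` in the log.
		_ = os.Remove(link)
		if err := os.Symlink(caPath, link); err != nil {
			panic(err)
		}
	}
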
	I0829 11:05:41.261539    1498 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0829 11:05:41.263002    1498 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0829 11:05:41.263049    1498 kubeadm.go:392] StartCluster: {Name:addons-048000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-048000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 11:05:41.263110    1498 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0829 11:05:41.268893    1498 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0829 11:05:41.272668    1498 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0829 11:05:41.276350    1498 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0829 11:05:41.279951    1498 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0829 11:05:41.279958    1498 kubeadm.go:157] found existing configuration files:
	
	I0829 11:05:41.279985    1498 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0829 11:05:41.283442    1498 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0829 11:05:41.283466    1498 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0829 11:05:41.286931    1498 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0829 11:05:41.290094    1498 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0829 11:05:41.290119    1498 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0829 11:05:41.293248    1498 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0829 11:05:41.296540    1498 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0829 11:05:41.296566    1498 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0829 11:05:41.300251    1498 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0829 11:05:41.303959    1498 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0829 11:05:41.303986    1498 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0829 11:05:41.307686    1498 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0829 11:05:41.330158    1498 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0829 11:05:41.330189    1498 kubeadm.go:310] [preflight] Running pre-flight checks
	I0829 11:05:41.368207    1498 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0829 11:05:41.368261    1498 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0829 11:05:41.368321    1498 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0829 11:05:41.372176    1498 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0829 11:05:41.388331    1498 out.go:235]   - Generating certificates and keys ...
	I0829 11:05:41.388362    1498 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0829 11:05:41.388391    1498 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0829 11:05:41.557570    1498 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0829 11:05:41.677616    1498 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0829 11:05:41.784368    1498 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0829 11:05:41.915112    1498 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0829 11:05:41.958000    1498 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0829 11:05:41.958068    1498 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-048000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0829 11:05:42.130021    1498 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0829 11:05:42.130085    1498 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-048000 localhost] and IPs [192.168.105.2 127.0.0.1 ::1]
	I0829 11:05:42.229365    1498 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0829 11:05:42.331045    1498 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0829 11:05:42.438199    1498 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0829 11:05:42.438236    1498 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0829 11:05:42.553336    1498 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0829 11:05:42.591568    1498 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0829 11:05:42.692167    1498 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0829 11:05:42.743851    1498 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0829 11:05:42.839055    1498 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0829 11:05:42.839282    1498 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0829 11:05:42.840552    1498 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0829 11:05:42.844770    1498 out.go:235]   - Booting up control plane ...
	I0829 11:05:42.844832    1498 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0829 11:05:42.844877    1498 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0829 11:05:42.844924    1498 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0829 11:05:42.851436    1498 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0829 11:05:42.854041    1498 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0829 11:05:42.854062    1498 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0829 11:05:42.954590    1498 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0829 11:05:42.954650    1498 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0829 11:05:43.455564    1498 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.153292ms
	I0829 11:05:43.455614    1498 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0829 11:05:46.957718    1498 kubeadm.go:310] [api-check] The API server is healthy after 3.502072043s
	I0829 11:05:46.967192    1498 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0829 11:05:46.973504    1498 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0829 11:05:46.983539    1498 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0829 11:05:46.983684    1498 kubeadm.go:310] [mark-control-plane] Marking the node addons-048000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0829 11:05:46.987359    1498 kubeadm.go:310] [bootstrap-token] Using token: jjdb3v.bbxz5y5c4ktww97e
	I0829 11:05:46.999713    1498 out.go:235]   - Configuring RBAC rules ...
	I0829 11:05:46.999773    1498 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0829 11:05:46.999815    1498 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0829 11:05:47.001375    1498 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0829 11:05:47.002392    1498 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0829 11:05:47.003436    1498 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0829 11:05:47.004791    1498 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0829 11:05:47.370714    1498 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0829 11:05:47.768445    1498 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0829 11:05:48.365059    1498 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0829 11:05:48.366725    1498 kubeadm.go:310] 
	I0829 11:05:48.366815    1498 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0829 11:05:48.366827    1498 kubeadm.go:310] 
	I0829 11:05:48.366993    1498 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0829 11:05:48.367011    1498 kubeadm.go:310] 
	I0829 11:05:48.367048    1498 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0829 11:05:48.367170    1498 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0829 11:05:48.367281    1498 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0829 11:05:48.367294    1498 kubeadm.go:310] 
	I0829 11:05:48.367397    1498 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0829 11:05:48.367414    1498 kubeadm.go:310] 
	I0829 11:05:48.367490    1498 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0829 11:05:48.367505    1498 kubeadm.go:310] 
	I0829 11:05:48.367589    1498 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0829 11:05:48.367719    1498 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0829 11:05:48.367821    1498 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0829 11:05:48.367832    1498 kubeadm.go:310] 
	I0829 11:05:48.367949    1498 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0829 11:05:48.368132    1498 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0829 11:05:48.368146    1498 kubeadm.go:310] 
	I0829 11:05:48.368347    1498 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token jjdb3v.bbxz5y5c4ktww97e \
	I0829 11:05:48.368582    1498 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a85be241893e40b79217c6f73688d370693933870156b869b3fa902a9be4179f \
	I0829 11:05:48.368623    1498 kubeadm.go:310] 	--control-plane 
	I0829 11:05:48.368633    1498 kubeadm.go:310] 
	I0829 11:05:48.368845    1498 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0829 11:05:48.368864    1498 kubeadm.go:310] 
	I0829 11:05:48.368987    1498 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token jjdb3v.bbxz5y5c4ktww97e \
	I0829 11:05:48.369166    1498 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a85be241893e40b79217c6f73688d370693933870156b869b3fa902a9be4179f 
	I0829 11:05:48.370160    1498 kubeadm.go:310] W0829 18:05:41.819179    1590 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0829 11:05:48.370619    1498 kubeadm.go:310] W0829 18:05:41.819521    1590 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0829 11:05:48.370820    1498 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
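Both join commands above carry a --discovery-token-ca-cert-hash, which kubeadm computes as the SHA-256 of the cluster CA certificate's DER-encoded Subject Public Key Info. A standalone sketch of the same computation (illustrative, not kubeadm's source):

	package main

	import (
		"crypto/sha256"
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)

	func main() {
		pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(pemBytes)
		if block == nil {
			panic("no PEM block in ca.crt")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
		if err != nil {
			panic(err)
		}
		// Should reproduce the sha256:... value in the join commands above.
		fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
	}
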
	I0829 11:05:48.370845    1498 cni.go:84] Creating CNI manager for ""
	I0829 11:05:48.370870    1498 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0829 11:05:48.374046    1498 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0829 11:05:48.377156    1498 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0829 11:05:48.387990    1498 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0829 11:05:48.401841    1498 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0829 11:05:48.401957    1498 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 11:05:48.402086    1498 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-048000 minikube.k8s.io/updated_at=2024_08_29T11_05_48_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=95341f0b655cea8be5ebfc6bf112c8367dc08d33 minikube.k8s.io/name=addons-048000 minikube.k8s.io/primary=true
	I0829 11:05:48.415556    1498 ops.go:34] apiserver oom_adj: -16
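The oom_adj probe above (cat /proc/$(pgrep kube-apiserver)/oom_adj) confirms the apiserver keeps its protective score of -16, so the kernel's OOM killer prefers other processes. A local sketch of the same check, with pgrep -n assumed here to pick a single newest PID:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("pgrep", "-n", "kube-apiserver").Output()
		if err != nil {
			panic(err)
		}
		pid := strings.TrimSpace(string(out))
		adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
		if err != nil {
			panic(err)
		}
		fmt.Printf("kube-apiserver oom_adj=%s", adj)
	}
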
	I0829 11:05:48.469207    1498 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 11:05:48.970006    1498 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 11:05:49.471398    1498 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 11:05:49.971338    1498 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 11:05:50.471357    1498 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 11:05:50.969851    1498 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 11:05:51.471334    1498 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 11:05:51.971380    1498 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 11:05:52.471308    1498 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 11:05:52.971308    1498 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 11:05:53.471215    1498 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 11:05:53.559320    1498 kubeadm.go:1113] duration metric: took 5.15749675s to wait for elevateKubeSystemPrivileges
	I0829 11:05:53.559338    1498 kubeadm.go:394] duration metric: took 12.2964015s to StartCluster
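The burst of `kubectl get sa default` runs above is a ~500ms readiness poll: elevateKubeSystemPrivileges (which created the minikube-rbac cluster-admin binding for kube-system:default at 11:05:48.402086) retries until the default service account exists, 5.16s here. A minimal stdlib sketch of the same wait pattern, not minikube's retry helper:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		deadline := time.Now().Add(2 * time.Minute) // timeout chosen for the sketch
		for time.Now().Before(deadline) {
			if exec.Command("kubectl", "get", "sa", "default").Run() == nil {
				fmt.Println("default service account is ready")
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for default service account")
	}
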
	I0829 11:05:53.559349    1498 settings.go:142] acquiring lock: {Name:mk4c43097bad4576952ccc223d0a8a031914c5a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 11:05:53.559534    1498 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19531-965/kubeconfig
	I0829 11:05:53.559713    1498 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19531-965/kubeconfig: {Name:mk8af293b3e18a99fbcb2b7e12f57a5251bf5686 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 11:05:53.559952    1498 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0829 11:05:53.559976    1498 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.105.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0829 11:05:53.560019    1498 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0829 11:05:53.560135    1498 addons.go:69] Setting yakd=true in profile "addons-048000"
	I0829 11:05:53.560149    1498 config.go:182] Loaded profile config "addons-048000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0829 11:05:53.560156    1498 addons.go:234] Setting addon yakd=true in "addons-048000"
	I0829 11:05:53.560158    1498 addons.go:69] Setting inspektor-gadget=true in profile "addons-048000"
	I0829 11:05:53.560168    1498 host.go:66] Checking if "addons-048000" exists ...
	I0829 11:05:53.560172    1498 addons.go:234] Setting addon inspektor-gadget=true in "addons-048000"
	I0829 11:05:53.560182    1498 addons.go:69] Setting default-storageclass=true in profile "addons-048000"
	I0829 11:05:53.560186    1498 host.go:66] Checking if "addons-048000" exists ...
	I0829 11:05:53.560191    1498 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-048000"
	I0829 11:05:53.560234    1498 addons.go:69] Setting ingress=true in profile "addons-048000"
	I0829 11:05:53.560250    1498 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-048000"
	I0829 11:05:53.560267    1498 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-048000"
	I0829 11:05:53.560261    1498 addons.go:69] Setting storage-provisioner=true in profile "addons-048000"
	I0829 11:05:53.560277    1498 addons.go:69] Setting ingress-dns=true in profile "addons-048000"
	I0829 11:05:53.560278    1498 addons.go:69] Setting gcp-auth=true in profile "addons-048000"
	I0829 11:05:53.560284    1498 addons.go:234] Setting addon ingress-dns=true in "addons-048000"
	I0829 11:05:53.560292    1498 addons.go:234] Setting addon storage-provisioner=true in "addons-048000"
	I0829 11:05:53.560297    1498 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-048000"
	I0829 11:05:53.560302    1498 mustload.go:65] Loading cluster: addons-048000
	I0829 11:05:53.560322    1498 addons.go:69] Setting volcano=true in profile "addons-048000"
	I0829 11:05:53.560329    1498 addons.go:234] Setting addon volcano=true in "addons-048000"
	I0829 11:05:53.560332    1498 host.go:66] Checking if "addons-048000" exists ...
	I0829 11:05:53.560270    1498 addons.go:234] Setting addon ingress=true in "addons-048000"
	I0829 11:05:53.560355    1498 host.go:66] Checking if "addons-048000" exists ...
	I0829 11:05:53.560295    1498 host.go:66] Checking if "addons-048000" exists ...
	I0829 11:05:53.560492    1498 retry.go:31] will retry after 990.527634ms: connect: dial unix /Users/jenkins/minikube-integration/19531-965/.minikube/machines/addons-048000/monitor: connect: connection refused
	I0829 11:05:53.560501    1498 addons.go:69] Setting volumesnapshots=true in profile "addons-048000"
	I0829 11:05:53.560508    1498 addons.go:234] Setting addon volumesnapshots=true in "addons-048000"
	I0829 11:05:53.560514    1498 host.go:66] Checking if "addons-048000" exists ...
	I0829 11:05:53.560516    1498 config.go:182] Loaded profile config "addons-048000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0829 11:05:53.560574    1498 retry.go:31] will retry after 1.243738138s: connect: dial unix /Users/jenkins/minikube-integration/19531-965/.minikube/machines/addons-048000/monitor: connect: connection refused
	I0829 11:05:53.560248    1498 addons.go:69] Setting cloud-spanner=true in profile "addons-048000"
	I0829 11:05:53.560305    1498 addons.go:69] Setting metrics-server=true in profile "addons-048000"
	I0829 11:05:53.560706    1498 addons.go:234] Setting addon cloud-spanner=true in "addons-048000"
	I0829 11:05:53.560731    1498 host.go:66] Checking if "addons-048000" exists ...
	I0829 11:05:53.560760    1498 retry.go:31] will retry after 625.923091ms: connect: dial unix /Users/jenkins/minikube-integration/19531-965/.minikube/machines/addons-048000/monitor: connect: connection refused
	I0829 11:05:53.560304    1498 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-048000"
	I0829 11:05:53.560772    1498 host.go:66] Checking if "addons-048000" exists ...
	I0829 11:05:53.560772    1498 retry.go:31] will retry after 593.838937ms: connect: dial unix /Users/jenkins/minikube-integration/19531-965/.minikube/machines/addons-048000/monitor: connect: connection refused
	I0829 11:05:53.560759    1498 addons.go:234] Setting addon metrics-server=true in "addons-048000"
	I0829 11:05:53.560814    1498 retry.go:31] will retry after 503.447936ms: connect: dial unix /Users/jenkins/minikube-integration/19531-965/.minikube/machines/addons-048000/monitor: connect: connection refused
	I0829 11:05:53.560335    1498 host.go:66] Checking if "addons-048000" exists ...
	I0829 11:05:53.560831    1498 host.go:66] Checking if "addons-048000" exists ...
	I0829 11:05:53.560865    1498 addons.go:69] Setting registry=true in profile "addons-048000"
	I0829 11:05:53.560872    1498 addons.go:234] Setting addon registry=true in "addons-048000"
	I0829 11:05:53.560879    1498 host.go:66] Checking if "addons-048000" exists ...
	I0829 11:05:53.560933    1498 retry.go:31] will retry after 1.380746802s: connect: dial unix /Users/jenkins/minikube-integration/19531-965/.minikube/machines/addons-048000/monitor: connect: connection refused
	I0829 11:05:53.560275    1498 host.go:66] Checking if "addons-048000" exists ...
	I0829 11:05:53.560976    1498 retry.go:31] will retry after 656.469318ms: connect: dial unix /Users/jenkins/minikube-integration/19531-965/.minikube/machines/addons-048000/monitor: connect: connection refused
	I0829 11:05:53.560336    1498 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-048000"
	I0829 11:05:53.560987    1498 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-048000"
	I0829 11:05:53.561023    1498 retry.go:31] will retry after 1.047449792s: connect: dial unix /Users/jenkins/minikube-integration/19531-965/.minikube/machines/addons-048000/monitor: connect: connection refused
	I0829 11:05:53.561083    1498 retry.go:31] will retry after 542.693569ms: connect: dial unix /Users/jenkins/minikube-integration/19531-965/.minikube/machines/addons-048000/monitor: connect: connection refused
	I0829 11:05:53.561101    1498 retry.go:31] will retry after 1.067166418s: connect: dial unix /Users/jenkins/minikube-integration/19531-965/.minikube/machines/addons-048000/monitor: connect: connection refused
	I0829 11:05:53.561138    1498 retry.go:31] will retry after 959.698199ms: connect: dial unix /Users/jenkins/minikube-integration/19531-965/.minikube/machines/addons-048000/monitor: connect: connection refused
	I0829 11:05:53.561211    1498 retry.go:31] will retry after 693.088272ms: connect: dial unix /Users/jenkins/minikube-integration/19531-965/.minikube/machines/addons-048000/monitor: connect: connection refused
	I0829 11:05:53.561257    1498 retry.go:31] will retry after 1.422321555s: connect: dial unix /Users/jenkins/minikube-integration/19531-965/.minikube/machines/addons-048000/monitor: connect: connection refused
	I0829 11:05:53.564510    1498 out.go:177] * Verifying Kubernetes components...
	I0829 11:05:53.572414    1498 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0829 11:05:53.576436    1498 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0829 11:05:53.576484    1498 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 11:05:53.579463    1498 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0829 11:05:53.579470    1498 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0829 11:05:53.579478    1498 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19531-965/.minikube/machines/addons-048000/id_rsa Username:docker}
	I0829 11:05:53.585457    1498 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0829 11:05:53.585464    1498 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0829 11:05:53.585471    1498 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19531-965/.minikube/machines/addons-048000/id_rsa Username:docker}
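Each sshutil line above opens a fresh SSH connection to the node (192.168.105.2:22, user docker, the profile's id_rsa key) so addon manifests can be copied concurrently. A minimal sketch of such a client, assuming golang.org/x/crypto/ssh, with the key path taken from the log:

	package main

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		keyBytes, err := os.ReadFile("/Users/jenkins/minikube-integration/19531-965/.minikube/machines/addons-048000/id_rsa")
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(keyBytes)
		if err != nil {
			panic(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for a throwaway test VM
		}
		client, err := ssh.Dial("tcp", "192.168.105.2:22", cfg)
		if err != nil {
			panic(err)
		}
		defer client.Close()

		sess, err := client.NewSession()
		if err != nil {
			panic(err)
		}
		defer sess.Close()
		out, err := sess.Output("uname -a")
		if err != nil {
			panic(err)
		}
		fmt.Print(string(out))
	}
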
	I0829 11:05:53.640544    1498 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0829 11:05:53.691985    1498 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 11:05:53.771698    1498 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0829 11:05:53.771710    1498 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0829 11:05:53.787485    1498 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0829 11:05:53.788225    1498 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0829 11:05:53.788232    1498 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0829 11:05:53.819431    1498 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0829 11:05:53.819443    1498 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0829 11:05:53.885108    1498 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0829 11:05:53.885118    1498 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0829 11:05:53.894111    1498 start.go:971] {"host.minikube.internal": 192.168.105.1} host record injected into CoreDNS's ConfigMap
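For readability: the sed pipeline at 11:05:53.640544 splices a hosts stanza (plus a log directive) into the CoreDNS Corefile so in-cluster lookups of host.minikube.internal resolve to the host machine. Reconstructed from that command, the injected block is:

	hosts {
	   192.168.105.1 host.minikube.internal
	   fallthrough
	}
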
	I0829 11:05:53.895518    1498 node_ready.go:35] waiting up to 6m0s for node "addons-048000" to be "Ready" ...
	I0829 11:05:53.904091    1498 node_ready.go:49] node "addons-048000" has status "Ready":"True"
	I0829 11:05:53.904103    1498 node_ready.go:38] duration metric: took 8.571542ms for node "addons-048000" to be "Ready" ...
	I0829 11:05:53.904108    1498 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 11:05:53.913169    1498 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-l9rg4" in "kube-system" namespace to be "Ready" ...
	I0829 11:05:53.945314    1498 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0829 11:05:54.071352    1498 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 11:05:54.074392    1498 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0829 11:05:54.074400    1498 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0829 11:05:54.074410    1498 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19531-965/.minikube/machines/addons-048000/id_rsa Username:docker}
	I0829 11:05:54.109405    1498 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0829 11:05:54.113372    1498 out.go:177]   - Using image docker.io/registry:2.8.3
	I0829 11:05:54.116327    1498 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0829 11:05:54.116336    1498 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0829 11:05:54.116347    1498 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19531-965/.minikube/machines/addons-048000/id_rsa Username:docker}
	I0829 11:05:54.158331    1498 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0829 11:05:54.162388    1498 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0829 11:05:54.162397    1498 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0829 11:05:54.162410    1498 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19531-965/.minikube/machines/addons-048000/id_rsa Username:docker}
	I0829 11:05:54.187334    1498 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0829 11:05:54.187794    1498 host.go:66] Checking if "addons-048000" exists ...
	I0829 11:05:54.221295    1498 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0829 11:05:54.225379    1498 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0829 11:05:54.225389    1498 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0829 11:05:54.225400    1498 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19531-965/.minikube/machines/addons-048000/id_rsa Username:docker}
	I0829 11:05:54.230338    1498 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0829 11:05:54.230350    1498 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0829 11:05:54.237713    1498 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0829 11:05:54.237724    1498 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0829 11:05:54.259348    1498 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0829 11:05:54.263401    1498 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0829 11:05:54.263413    1498 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0829 11:05:54.263423    1498 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19531-965/.minikube/machines/addons-048000/id_rsa Username:docker}
	I0829 11:05:54.290175    1498 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0829 11:05:54.290186    1498 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0829 11:05:54.292253    1498 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0829 11:05:54.292260    1498 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0829 11:05:54.316790    1498 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0829 11:05:54.316802    1498 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0829 11:05:54.319000    1498 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0829 11:05:54.346128    1498 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0829 11:05:54.359064    1498 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0829 11:05:54.359078    1498 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0829 11:05:54.393313    1498 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-048000 service yakd-dashboard -n yakd-dashboard
	
	I0829 11:05:54.403574    1498 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-048000" context rescaled to 1 replicas
	I0829 11:05:54.406810    1498 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0829 11:05:54.406819    1498 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0829 11:05:54.423053    1498 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0829 11:05:54.438020    1498 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0829 11:05:54.524859    1498 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0829 11:05:54.527915    1498 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0829 11:05:54.527926    1498 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0829 11:05:54.527937    1498 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19531-965/.minikube/machines/addons-048000/id_rsa Username:docker}
	I0829 11:05:54.556888    1498 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.31.0
	I0829 11:05:54.560865    1498 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0829 11:05:54.560874    1498 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0829 11:05:54.560884    1498 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19531-965/.minikube/machines/addons-048000/id_rsa Username:docker}
	I0829 11:05:54.609857    1498 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0829 11:05:54.609867    1498 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0829 11:05:54.612933    1498 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0829 11:05:54.622900    1498 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0829 11:05:54.629359    1498 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-048000"
	I0829 11:05:54.629379    1498 host.go:66] Checking if "addons-048000" exists ...
	I0829 11:05:54.629713    1498 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0829 11:05:54.632894    1498 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0829 11:05:54.633298    1498 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0829 11:05:54.633306    1498 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0829 11:05:54.633316    1498 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19531-965/.minikube/machines/addons-048000/id_rsa Username:docker}
	I0829 11:05:54.642825    1498 out.go:177]   - Using image docker.io/busybox:stable
	I0829 11:05:54.645861    1498 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0829 11:05:54.645869    1498 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0829 11:05:54.645878    1498 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19531-965/.minikube/machines/addons-048000/id_rsa Username:docker}
	I0829 11:05:54.657050    1498 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0829 11:05:54.657066    1498 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0829 11:05:54.663151    1498 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0829 11:05:54.663167    1498 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0829 11:05:54.670740    1498 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0829 11:05:54.698298    1498 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0829 11:05:54.698311    1498 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0829 11:05:54.805277    1498 addons.go:234] Setting addon default-storageclass=true in "addons-048000"
	I0829 11:05:54.805299    1498 host.go:66] Checking if "addons-048000" exists ...
	I0829 11:05:54.805851    1498 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0829 11:05:54.805858    1498 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0829 11:05:54.805865    1498 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19531-965/.minikube/machines/addons-048000/id_rsa Username:docker}
	I0829 11:05:54.814065    1498 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0829 11:05:54.814078    1498 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0829 11:05:54.840510    1498 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0829 11:05:54.892984    1498 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0829 11:05:54.906330    1498 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0829 11:05:54.906347    1498 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0829 11:05:54.937260    1498 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0829 11:05:54.937273    1498 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0829 11:05:54.947398    1498 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0829 11:05:54.951426    1498 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0829 11:05:54.961371    1498 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0829 11:05:54.965428    1498 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0829 11:05:54.965438    1498 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0829 11:05:54.965449    1498 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19531-965/.minikube/machines/addons-048000/id_rsa Username:docker}
	I0829 11:05:54.989409    1498 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0829 11:05:54.993459    1498 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0829 11:05:54.997397    1498 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0829 11:05:55.007388    1498 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0829 11:05:55.010409    1498 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0829 11:05:55.014430    1498 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0829 11:05:55.018349    1498 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0829 11:05:55.022410    1498 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0829 11:05:55.028369    1498 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0829 11:05:55.028393    1498 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0829 11:05:55.028403    1498 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19531-965/.minikube/machines/addons-048000/id_rsa Username:docker}
	I0829 11:05:55.058253    1498 addons.go:475] Verifying addon registry=true in "addons-048000"
	I0829 11:05:55.061402    1498 out.go:177] * Verifying registry addon...
	I0829 11:05:55.069647    1498 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0829 11:05:55.069658    1498 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0829 11:05:55.070861    1498 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0829 11:05:55.072896    1498 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0829 11:05:55.088335    1498 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0829 11:05:55.088347    1498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
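Note: the kapi.go wait loop above simply polls pods matching the label selector until they report Ready. A rough out-of-band equivalent, assuming kubectl access to the same cluster (selector and namespace are taken from the log; the 6m timeout is illustrative, not minikube's own):

	  kubectl wait pod --namespace kube-system \
	    --selector=kubernetes.io/minikube-addons=registry \
	    --for=condition=Ready --timeout=6m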
	I0829 11:05:55.096048    1498 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0829 11:05:55.255619    1498 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0829 11:05:55.255634    1498 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0829 11:05:55.438487    1498 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0829 11:05:55.438500    1498 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0829 11:05:55.508530    1498 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0829 11:05:55.508545    1498 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0829 11:05:55.576750    1498 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0829 11:05:55.576760    1498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 11:05:55.628426    1498 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0829 11:05:55.637547    1498 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0829 11:05:55.637562    1498 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0829 11:05:55.761396    1498 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0829 11:05:55.761408    1498 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0829 11:05:55.845812    1498 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0829 11:05:55.845826    1498 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0829 11:05:55.922694    1498 pod_ready.go:103] pod "coredns-6f6b679f8f-l9rg4" in "kube-system" namespace has status "Ready":"False"
	I0829 11:05:55.959223    1498 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0829 11:05:55.959236    1498 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0829 11:05:56.002770    1498 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0829 11:05:56.002781    1498 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0829 11:05:56.048097    1498 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0829 11:05:56.048110    1498 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0829 11:05:56.080453    1498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 11:05:56.082962    1498 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0829 11:05:56.082970    1498 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0829 11:05:56.122588    1498 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0829 11:05:56.122598    1498 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0829 11:05:56.130722    1498 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.45997675s)
	I0829 11:05:56.130745    1498 addons.go:475] Verifying addon metrics-server=true in "addons-048000"
	I0829 11:05:56.131827    1498 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.69380625s)
	W0829 11:05:56.131841    1498 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0829 11:05:56.131854    1498 retry.go:31] will retry after 325.204464ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	[stdout/stderr identical to the apply output above]
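Note: this is the usual CRD establishment race. The VolumeSnapshotClass object is submitted in the same apply batch as the CRDs that define its kind, and API discovery has not yet registered snapshot.storage.k8s.io/v1, hence "ensure CRDs are installed first". The retry below (the same batch re-applied with --force at 11:05:56.458) succeeds once the CRDs have been established. A hedged two-phase sketch that avoids the race entirely (file paths taken from the log; the wait uses the standard CRD Established condition):

	  # phase 1: create the CRDs and wait until the new API is served
	  kubectl apply \
	    -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml \
	    -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml \
	    -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	  kubectl wait --for=condition=Established --timeout=60s \
	    crd/volumesnapshotclasses.snapshot.storage.k8s.io
	  # phase 2: the dependent custom resource now maps cleanly
	  kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml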
	I0829 11:05:56.157030    1498 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0829 11:05:56.157042    1498 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0829 11:05:56.182972    1498 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0829 11:05:56.458480    1498 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0829 11:05:56.577050    1498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 11:05:57.076777    1498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 11:05:57.606977    1498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 11:05:57.936793    1498 pod_ready.go:103] pod "coredns-6f6b679f8f-l9rg4" in "kube-system" namespace has status "Ready":"False"
	I0829 11:05:58.097245    1498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 11:05:58.157285    1498 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (3.316787625s)
	I0829 11:05:58.157346    1498 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (3.264379416s)
	I0829 11:05:58.157381    1498 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.086538s)
	I0829 11:05:58.157447    1498 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (3.061413916s)
	I0829 11:05:58.157456    1498 addons.go:475] Verifying addon ingress=true in "addons-048000"
	I0829 11:05:58.157483    1498 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (2.529064834s)
	I0829 11:05:58.165397    1498 out.go:177] * Verifying ingress addon...
	I0829 11:05:58.171725    1498 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	W0829 11:05:58.189440    1498 out.go:270] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
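Note: the storage-provisioner-rancher callback hit an optimistic-concurrency conflict ("the object has been modified") while updating the local-path StorageClass: another writer changed the object between the callback's read and its update. A retry, or a server-side patch of just the default-class annotation, sidesteps the stale read. A minimal sketch, assuming the standard default-class annotation:

	  kubectl patch storageclass local-path -p \
	    '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'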
	I0829 11:05:58.192940    1498 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0829 11:05:58.192948    1498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 11:05:58.526765    1498 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.343794125s)
	I0829 11:05:58.526787    1498 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-048000"
	I0829 11:05:58.526840    1498 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.0683625s)
	I0829 11:05:58.529823    1498 out.go:177] * Verifying csi-hostpath-driver addon...
	I0829 11:05:58.537195    1498 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0829 11:05:58.539726    1498 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0829 11:05:58.539732    1498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 11:05:58.649024    1498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 11:05:58.675702    1498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 11:05:59.041941    1498 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0829 11:05:59.041954    1498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 11:05:59.076932    1498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 11:05:59.175825    1498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 11:05:59.540807    1498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 11:05:59.640733    1498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 11:05:59.675846    1498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 11:06:00.041920    1498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 11:06:00.076537    1498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 11:06:00.175790    1498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 11:06:00.416271    1498 pod_ready.go:98] error getting pod "coredns-6f6b679f8f-l9rg4" in "kube-system" namespace (skipping!): pods "coredns-6f6b679f8f-l9rg4" not found
	I0829 11:06:00.416283    1498 pod_ready.go:82] duration metric: took 6.50316125s for pod "coredns-6f6b679f8f-l9rg4" in "kube-system" namespace to be "Ready" ...
	E0829 11:06:00.416288    1498 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-6f6b679f8f-l9rg4" in "kube-system" namespace (skipping!): pods "coredns-6f6b679f8f-l9rg4" not found
	I0829 11:06:00.416291    1498 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-rvdk4" in "kube-system" namespace to be "Ready" ...
	I0829 11:06:00.418388    1498 pod_ready.go:93] pod "coredns-6f6b679f8f-rvdk4" in "kube-system" namespace has status "Ready":"True"
	I0829 11:06:00.418394    1498 pod_ready.go:82] duration metric: took 2.100125ms for pod "coredns-6f6b679f8f-rvdk4" in "kube-system" namespace to be "Ready" ...
	I0829 11:06:00.418398    1498 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-048000" in "kube-system" namespace to be "Ready" ...
	I0829 11:06:00.420166    1498 pod_ready.go:93] pod "etcd-addons-048000" in "kube-system" namespace has status "Ready":"True"
	I0829 11:06:00.420171    1498 pod_ready.go:82] duration metric: took 1.770583ms for pod "etcd-addons-048000" in "kube-system" namespace to be "Ready" ...
	I0829 11:06:00.420175    1498 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-048000" in "kube-system" namespace to be "Ready" ...
	I0829 11:06:00.421977    1498 pod_ready.go:93] pod "kube-apiserver-addons-048000" in "kube-system" namespace has status "Ready":"True"
	I0829 11:06:00.421983    1498 pod_ready.go:82] duration metric: took 1.804625ms for pod "kube-apiserver-addons-048000" in "kube-system" namespace to be "Ready" ...
	I0829 11:06:00.421987    1498 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-048000" in "kube-system" namespace to be "Ready" ...
	I0829 11:06:00.423842    1498 pod_ready.go:93] pod "kube-controller-manager-addons-048000" in "kube-system" namespace has status "Ready":"True"
	I0829 11:06:00.423847    1498 pod_ready.go:82] duration metric: took 1.857375ms for pod "kube-controller-manager-addons-048000" in "kube-system" namespace to be "Ready" ...
	I0829 11:06:00.423851    1498 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-nhfsw" in "kube-system" namespace to be "Ready" ...
	I0829 11:06:00.540884    1498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 11:06:00.576492    1498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 11:06:00.618741    1498 pod_ready.go:93] pod "kube-proxy-nhfsw" in "kube-system" namespace has status "Ready":"True"
	I0829 11:06:00.618750    1498 pod_ready.go:82] duration metric: took 194.897167ms for pod "kube-proxy-nhfsw" in "kube-system" namespace to be "Ready" ...
	I0829 11:06:00.618755    1498 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-048000" in "kube-system" namespace to be "Ready" ...
	I0829 11:06:00.675797    1498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 11:06:01.018658    1498 pod_ready.go:93] pod "kube-scheduler-addons-048000" in "kube-system" namespace has status "Ready":"True"
	I0829 11:06:01.018667    1498 pod_ready.go:82] duration metric: took 399.912625ms for pod "kube-scheduler-addons-048000" in "kube-system" namespace to be "Ready" ...
	I0829 11:06:01.018671    1498 pod_ready.go:39] duration metric: took 7.114622167s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 11:06:01.018704    1498 api_server.go:52] waiting for apiserver process to appear ...
	I0829 11:06:01.018765    1498 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 11:06:01.025576    1498 api_server.go:72] duration metric: took 7.465654834s to wait for apiserver process to appear ...
	I0829 11:06:01.025586    1498 api_server.go:88] waiting for apiserver healthz status ...
	I0829 11:06:01.025598    1498 api_server.go:253] Checking apiserver healthz at https://192.168.105.2:8443/healthz ...
	I0829 11:06:01.028075    1498 api_server.go:279] https://192.168.105.2:8443/healthz returned 200:
	ok
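Note: /healthz on the apiserver is typically readable without credentials (the default system:public-info-viewer RBAC grants it to unauthenticated users), so the probe above can be reproduced with a plain HTTP client. An illustrative check against the endpoint from the log; -k skips server certificate verification for brevity, where a real probe would pin minikube's CA:

	  curl -k https://192.168.105.2:8443/healthz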
	I0829 11:06:01.028518    1498 api_server.go:141] control plane version: v1.31.0
	I0829 11:06:01.028524    1498 api_server.go:131] duration metric: took 2.935583ms to wait for apiserver health ...
	I0829 11:06:01.028529    1498 system_pods.go:43] waiting for kube-system pods to appear ...
	I0829 11:06:01.041943    1498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 11:06:01.075545    1498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 11:06:01.175949    1498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 11:06:01.222340    1498 system_pods.go:59] 17 kube-system pods found
	I0829 11:06:01.222354    1498 system_pods.go:61] "coredns-6f6b679f8f-rvdk4" [67fcbd30-f94c-48e5-9601-e5bfdd79113d] Running
	I0829 11:06:01.222359    1498 system_pods.go:61] "csi-hostpath-attacher-0" [fb9138d9-d25d-449b-8496-709f6c4d5967] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0829 11:06:01.222362    1498 system_pods.go:61] "csi-hostpath-resizer-0" [77f98eb9-82ed-4691-8578-579e58a06a8d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0829 11:06:01.222366    1498 system_pods.go:61] "csi-hostpathplugin-mjsnx" [252aa175-8fd6-41a6-91d6-c6f4613c380e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0829 11:06:01.222368    1498 system_pods.go:61] "etcd-addons-048000" [e57f47b4-05ee-4ec1-9acb-57573964d6ca] Running
	I0829 11:06:01.222370    1498 system_pods.go:61] "kube-apiserver-addons-048000" [fed4ef15-005b-49d9-aad3-c833c00e6c90] Running
	I0829 11:06:01.222372    1498 system_pods.go:61] "kube-controller-manager-addons-048000" [a4487cc2-1dc8-4035-a190-2fe2aad19ac3] Running
	I0829 11:06:01.222375    1498 system_pods.go:61] "kube-ingress-dns-minikube" [fa6eb63b-6fa1-4c76-8fdd-f4e08708a8fe] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0829 11:06:01.222376    1498 system_pods.go:61] "kube-proxy-nhfsw" [4d1e0d40-7591-492f-a42f-a818d22b938e] Running
	I0829 11:06:01.222378    1498 system_pods.go:61] "kube-scheduler-addons-048000" [a111fc7b-789c-4a3c-a0c1-04618a5fac2c] Running
	I0829 11:06:01.222380    1498 system_pods.go:61] "metrics-server-8988944d9-b7srg" [13439dd5-2130-4f2a-aa96-82686d84633f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0829 11:06:01.222384    1498 system_pods.go:61] "nvidia-device-plugin-daemonset-t7r5k" [2700f353-f50b-4859-8e8b-852b4f080fb8] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0829 11:06:01.222386    1498 system_pods.go:61] "registry-6fb4cdfc84-2xddm" [c5b53102-3848-4204-9379-99d61d77a524] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0829 11:06:01.222388    1498 system_pods.go:61] "registry-proxy-vr87j" [3059ef24-0a76-4ac9-bc80-747fc239f276] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0829 11:06:01.222391    1498 system_pods.go:61] "snapshot-controller-56fcc65765-28ktw" [754616ba-25fe-48eb-87bd-f2109d5484a6] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0829 11:06:01.222393    1498 system_pods.go:61] "snapshot-controller-56fcc65765-qftpg" [2e8f93ca-e751-44e4-9493-dd72dd5e3988] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0829 11:06:01.222395    1498 system_pods.go:61] "storage-provisioner" [7eee403f-6a9d-4527-bc52-8dbd8cd7c96e] Running
	I0829 11:06:01.222398    1498 system_pods.go:74] duration metric: took 193.868292ms to wait for pod list to return data ...
	I0829 11:06:01.222401    1498 default_sa.go:34] waiting for default service account to be created ...
	I0829 11:06:01.393652    1498 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0829 11:06:01.393668    1498 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19531-965/.minikube/machines/addons-048000/id_rsa Username:docker}
	I0829 11:06:01.418549    1498 default_sa.go:45] found service account: "default"
	I0829 11:06:01.418559    1498 default_sa.go:55] duration metric: took 196.15675ms for default service account to be created ...
	I0829 11:06:01.418563    1498 system_pods.go:116] waiting for k8s-apps to be running ...
	I0829 11:06:01.426641    1498 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0829 11:06:01.433630    1498 addons.go:234] Setting addon gcp-auth=true in "addons-048000"
	I0829 11:06:01.433652    1498 host.go:66] Checking if "addons-048000" exists ...
	I0829 11:06:01.434390    1498 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0829 11:06:01.434398    1498 sshutil.go:53] new ssh client: &{IP:192.168.105.2 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19531-965/.minikube/machines/addons-048000/id_rsa Username:docker}
	I0829 11:06:01.463853    1498 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0829 11:06:01.467197    1498 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0829 11:06:01.470200    1498 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0829 11:06:01.470206    1498 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0829 11:06:01.476462    1498 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0829 11:06:01.476469    1498 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0829 11:06:01.483424    1498 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0829 11:06:01.483431    1498 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0829 11:06:01.492170    1498 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0829 11:06:01.540112    1498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0829 11:06:01.640378    1498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0829 11:06:01.643951    1498 system_pods.go:86] 17 kube-system pods found
	I0829 11:06:01.643962    1498 system_pods.go:89] "coredns-6f6b679f8f-rvdk4" [67fcbd30-f94c-48e5-9601-e5bfdd79113d] Running
	I0829 11:06:01.643967    1498 system_pods.go:89] "csi-hostpath-attacher-0" [fb9138d9-d25d-449b-8496-709f6c4d5967] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0829 11:06:01.643970    1498 system_pods.go:89] "csi-hostpath-resizer-0" [77f98eb9-82ed-4691-8578-579e58a06a8d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0829 11:06:01.643975    1498 system_pods.go:89] "csi-hostpathplugin-mjsnx" [252aa175-8fd6-41a6-91d6-c6f4613c380e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0829 11:06:01.643978    1498 system_pods.go:89] "etcd-addons-048000" [e57f47b4-05ee-4ec1-9acb-57573964d6ca] Running
	I0829 11:06:01.643980    1498 system_pods.go:89] "kube-apiserver-addons-048000" [fed4ef15-005b-49d9-aad3-c833c00e6c90] Running
	I0829 11:06:01.643983    1498 system_pods.go:89] "kube-controller-manager-addons-048000" [a4487cc2-1dc8-4035-a190-2fe2aad19ac3] Running
	I0829 11:06:01.643987    1498 system_pods.go:89] "kube-ingress-dns-minikube" [fa6eb63b-6fa1-4c76-8fdd-f4e08708a8fe] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0829 11:06:01.643990    1498 system_pods.go:89] "kube-proxy-nhfsw" [4d1e0d40-7591-492f-a42f-a818d22b938e] Running
	I0829 11:06:01.643991    1498 system_pods.go:89] "kube-scheduler-addons-048000" [a111fc7b-789c-4a3c-a0c1-04618a5fac2c] Running
	I0829 11:06:01.643994    1498 system_pods.go:89] "metrics-server-8988944d9-b7srg" [13439dd5-2130-4f2a-aa96-82686d84633f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0829 11:06:01.643998    1498 system_pods.go:89] "nvidia-device-plugin-daemonset-t7r5k" [2700f353-f50b-4859-8e8b-852b4f080fb8] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0829 11:06:01.644001    1498 system_pods.go:89] "registry-6fb4cdfc84-2xddm" [c5b53102-3848-4204-9379-99d61d77a524] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0829 11:06:01.644003    1498 system_pods.go:89] "registry-proxy-vr87j" [3059ef24-0a76-4ac9-bc80-747fc239f276] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0829 11:06:01.644006    1498 system_pods.go:89] "snapshot-controller-56fcc65765-28ktw" [754616ba-25fe-48eb-87bd-f2109d5484a6] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0829 11:06:01.644009    1498 system_pods.go:89] "snapshot-controller-56fcc65765-qftpg" [2e8f93ca-e751-44e4-9493-dd72dd5e3988] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0829 11:06:01.644011    1498 system_pods.go:89] "storage-provisioner" [7eee403f-6a9d-4527-bc52-8dbd8cd7c96e] Running
	I0829 11:06:01.644015    1498 system_pods.go:126] duration metric: took 225.451375ms to wait for k8s-apps to be running ...
	I0829 11:06:01.644020    1498 system_svc.go:44] waiting for kubelet service to be running ....
	I0829 11:06:01.644074    1498 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 11:06:01.675700    1498 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0829 11:06:01.850950    1498 system_svc.go:56] duration metric: took 206.927709ms WaitForService to wait for kubelet
	I0829 11:06:01.850966    1498 kubeadm.go:582] duration metric: took 8.291052292s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0829 11:06:01.850977    1498 node_conditions.go:102] verifying NodePressure condition ...
	I0829 11:06:01.851779    1498 addons.go:475] Verifying addon gcp-auth=true in "addons-048000"
	I0829 11:06:01.853266    1498 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0829 11:06:01.853277    1498 node_conditions.go:123] node cpu capacity is 2
	I0829 11:06:01.853283    1498 node_conditions.go:105] duration metric: took 2.303209ms to run NodePressure ...
	I0829 11:06:01.853290    1498 start.go:241] waiting for startup goroutines ...
	I0829 11:06:01.859399    1498 out.go:177] * Verifying gcp-auth addon...
	I0829 11:06:01.862802    1498 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0829 11:06:01.867617    1498 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	[11:06:02.041 - 11:06:27.546: kapi.go:96 repeated the three wait lines above roughly every 500ms; "kubernetes.io/minikube-addons=csi-hostpath-driver", "kubernetes.io/minikube-addons=registry", and "app.kubernetes.io/name=ingress-nginx" all remained Pending: [<nil>] throughout]
	I0829 11:06:27.578733    1498 kapi.go:107] duration metric: took 32.506126833s to wait for kubernetes.io/minikube-addons=registry ...
	[... 174 near-identical kapi.go:96 "waiting for pod" poll lines for app.kubernetes.io/name=ingress-nginx and kubernetes.io/minikube-addons=csi-hostpath-driver (state Pending, ~500ms interval, 11:06:27–11:07:11) elided ...]
	I0829 11:07:11.175966    1498 kapi.go:107] duration metric: took 1m13.004895375s to wait for app.kubernetes.io/name=ingress-nginx ...
	[... 22 near-identical kapi.go:96 "waiting for pod" poll lines for kubernetes.io/minikube-addons=csi-hostpath-driver (state Pending, ~500ms interval, 11:07:11–11:07:22) elided ...]
	I0829 11:07:22.541800    1498 kapi.go:107] duration metric: took 1m24.005357542s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0829 11:07:23.865426    1498 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0829 11:07:23.865434    1498 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	[... 136 near-identical kapi.go:96 "waiting for pod" poll lines for kubernetes.io/minikube-addons=gcp-auth (state Pending, ~500ms interval, 11:07:24–11:08:31) elided ...]
	I0829 11:08:32.369602    1498 kapi.go:107] duration metric: took 2m30.508152583s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0829 11:08:32.373474    1498 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-048000 cluster.
	I0829 11:08:32.376358    1498 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0829 11:08:32.380466    1498 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0829 11:08:32.385469    1498 out.go:177] * Enabled addons: ingress-dns, yakd, storage-provisioner, nvidia-device-plugin, cloud-spanner, metrics-server, volcano, inspektor-gadget, default-storageclass, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I0829 11:08:32.389378    1498 addons.go:510] duration metric: took 2m38.830799458s for enable addons: enabled=[ingress-dns yakd storage-provisioner nvidia-device-plugin cloud-spanner metrics-server volcano inspektor-gadget default-storageclass volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I0829 11:08:32.389392    1498 start.go:246] waiting for cluster config update ...
	I0829 11:08:32.389406    1498 start.go:255] writing updated cluster config ...
	I0829 11:08:32.391288    1498 ssh_runner.go:195] Run: rm -f paused
	I0829 11:08:32.550513    1498 start.go:600] kubectl: 1.29.2, cluster: 1.31.0 (minor skew: 2)
	I0829 11:08:32.554446    1498 out.go:201] 
	W0829 11:08:32.558391    1498 out.go:270] ! /usr/local/bin/kubectl is version 1.29.2, which may have incompatibilities with Kubernetes 1.31.0.
	I0829 11:08:32.562312    1498 out.go:177]   - Want kubectl v1.31.0? Try 'minikube kubectl -- get pods -A'
	I0829 11:08:32.575427    1498 out.go:177] * Done! kubectl is now configured to use "addons-048000" cluster and "default" namespace by default
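
The gcp-auth notes above describe the addon's opt-out mechanism: the gcp-auth webhook (the gcp-auth-89d5ffd79-6tgp6 pod listed below) mounts credentials into every new pod unless the pod carries the gcp-auth-skip-secret label. A minimal sketch of an opted-out pod spec follows; only the label key comes from the output above, while the pod name, image, and the "true" value are assumptions for illustration:

    apiVersion: v1
    kind: Pod
    metadata:
      name: no-gcp-creds             # hypothetical name
      labels:
        gcp-auth-skip-secret: "true" # label key from the addon message above; value assumed
    spec:
      containers:
      - name: app
        image: nginx:alpine
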
	
	
	==> Docker <==
	Aug 29 18:18:19 addons-048000 cri-dockerd[1179]: time="2024-08-29T18:18:19Z" level=info msg="Stop pulling image docker.io/nginx:alpine: Status: Downloaded newer image for nginx:alpine"
	Aug 29 18:18:19 addons-048000 dockerd[1282]: time="2024-08-29T18:18:19.612444842Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 29 18:18:19 addons-048000 dockerd[1282]: time="2024-08-29T18:18:19.612476383Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 29 18:18:19 addons-048000 dockerd[1282]: time="2024-08-29T18:18:19.612493507Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 29 18:18:19 addons-048000 dockerd[1282]: time="2024-08-29T18:18:19.612526673Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 29 18:18:23 addons-048000 dockerd[1276]: time="2024-08-29T18:18:23.369987350Z" level=info msg="ignoring event" container=48fc94d551cd68b1a166e2ef48e03c5989411df1f5d685caefba1d34874e0cca module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 29 18:18:23 addons-048000 dockerd[1282]: time="2024-08-29T18:18:23.369981559Z" level=info msg="shim disconnected" id=48fc94d551cd68b1a166e2ef48e03c5989411df1f5d685caefba1d34874e0cca namespace=moby
	Aug 29 18:18:23 addons-048000 dockerd[1282]: time="2024-08-29T18:18:23.370012225Z" level=warning msg="cleaning up after shim disconnected" id=48fc94d551cd68b1a166e2ef48e03c5989411df1f5d685caefba1d34874e0cca namespace=moby
	Aug 29 18:18:23 addons-048000 dockerd[1282]: time="2024-08-29T18:18:23.370016516Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 29 18:18:23 addons-048000 dockerd[1282]: time="2024-08-29T18:18:23.512830150Z" level=info msg="shim disconnected" id=4410a5c85d11aaa66dab3ce8719bc84b984886aa1a5db38526650bb5f37bbb91 namespace=moby
	Aug 29 18:18:23 addons-048000 dockerd[1282]: time="2024-08-29T18:18:23.512884482Z" level=warning msg="cleaning up after shim disconnected" id=4410a5c85d11aaa66dab3ce8719bc84b984886aa1a5db38526650bb5f37bbb91 namespace=moby
	Aug 29 18:18:23 addons-048000 dockerd[1282]: time="2024-08-29T18:18:23.512890357Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 29 18:18:23 addons-048000 dockerd[1276]: time="2024-08-29T18:18:23.513099476Z" level=info msg="ignoring event" container=4410a5c85d11aaa66dab3ce8719bc84b984886aa1a5db38526650bb5f37bbb91 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 29 18:18:23 addons-048000 dockerd[1282]: time="2024-08-29T18:18:23.554735844Z" level=info msg="shim disconnected" id=497714a41c5098db7e2fd992481b124d9fcfa1a57ae48885260e7f70cb63aeba namespace=moby
	Aug 29 18:18:23 addons-048000 dockerd[1282]: time="2024-08-29T18:18:23.554768926Z" level=warning msg="cleaning up after shim disconnected" id=497714a41c5098db7e2fd992481b124d9fcfa1a57ae48885260e7f70cb63aeba namespace=moby
	Aug 29 18:18:23 addons-048000 dockerd[1282]: time="2024-08-29T18:18:23.554773259Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 29 18:18:23 addons-048000 dockerd[1276]: time="2024-08-29T18:18:23.554906006Z" level=info msg="ignoring event" container=497714a41c5098db7e2fd992481b124d9fcfa1a57ae48885260e7f70cb63aeba module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 29 18:18:23 addons-048000 dockerd[1282]: time="2024-08-29T18:18:23.609041421Z" level=info msg="shim disconnected" id=7c1d83eaee1b24864035c93678a56ae994d472dc681e42901db579a41d2a8a04 namespace=moby
	Aug 29 18:18:23 addons-048000 dockerd[1276]: time="2024-08-29T18:18:23.609090462Z" level=info msg="ignoring event" container=7c1d83eaee1b24864035c93678a56ae994d472dc681e42901db579a41d2a8a04 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 29 18:18:23 addons-048000 dockerd[1282]: time="2024-08-29T18:18:23.609157585Z" level=warning msg="cleaning up after shim disconnected" id=7c1d83eaee1b24864035c93678a56ae994d472dc681e42901db579a41d2a8a04 namespace=moby
	Aug 29 18:18:23 addons-048000 dockerd[1282]: time="2024-08-29T18:18:23.609178376Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 29 18:18:23 addons-048000 dockerd[1276]: time="2024-08-29T18:18:23.676550861Z" level=info msg="ignoring event" container=0d8be53995da2b19de07d0f947e38413cf5f4b7aab95303ea204b36a59c08da1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 29 18:18:23 addons-048000 dockerd[1282]: time="2024-08-29T18:18:23.676681024Z" level=info msg="shim disconnected" id=0d8be53995da2b19de07d0f947e38413cf5f4b7aab95303ea204b36a59c08da1 namespace=moby
	Aug 29 18:18:23 addons-048000 dockerd[1282]: time="2024-08-29T18:18:23.676986600Z" level=warning msg="cleaning up after shim disconnected" id=0d8be53995da2b19de07d0f947e38413cf5f4b7aab95303ea204b36a59c08da1 namespace=moby
	Aug 29 18:18:23 addons-048000 dockerd[1282]: time="2024-08-29T18:18:23.677007349Z" level=info msg="cleaning up dead shim" namespace=moby
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                       ATTEMPT             POD ID              POD
	5cef6d3fbef29       nginx@sha256:c04c18adc2a407740a397c8407c011fc6c90026a9b65cceddef7ae5484360158                                                4 seconds ago       Running             nginx                      0                   0712a6a2d8b53       nginx
	1f5d77a9498f4       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                 9 minutes ago       Running             gcp-auth                   0                   cc53b982336ce       gcp-auth-89d5ffd79-6tgp6
	441c7c7d5ff81       registry.k8s.io/ingress-nginx/controller@sha256:d5f8217feeac4887cb1ed21f27c2674e58be06bd8f5184cacea2a69abaf78dce             11 minutes ago      Running             controller                 0                   cc33052372bb4       ingress-nginx-controller-bc57996ff-vv9dw
	60877db28f13e       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3   11 minutes ago      Exited              patch                      0                   aa7bf4ef483a2       ingress-nginx-admission-patch-d8mvk
	05cf1c02fd52f       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3   11 minutes ago      Exited              create                     0                   66c6af881e449       ingress-nginx-admission-create-mlsl6
	dd4042f55f569       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                       11 minutes ago      Running             local-path-provisioner     0                   c276852165810       local-path-provisioner-86d989889c-2ckzc
	497714a41c509       gcr.io/k8s-minikube/kube-registry-proxy@sha256:b3fa0b2df8737fdb85ad5918a7e2652527463e357afff83a5e5bb966bcedc367              11 minutes ago      Exited              registry-proxy             0                   0d8be53995da2       registry-proxy-vr87j
	37278a0b8e331       gcr.io/cloud-spanner-emulator/emulator@sha256:636fdfc528824bae5f0ea2eca6ae307fe81092f05ec21038008bc0d6100e52fc               12 minutes ago      Running             cloud-spanner-emulator     0                   f526fcc76f524       cloud-spanner-emulator-769b77f747-2gqs5
	daeff120b2637       nvcr.io/nvidia/k8s-device-plugin@sha256:ed39e22c8b71343fb996737741a99da88ce6c75dd83b5c520e0b3d8e8a884c47                     12 minutes ago      Running             nvidia-device-plugin-ctr   0                   eea61c92d97a1       nvidia-device-plugin-daemonset-t7r5k
	4410a5c85d11a       registry@sha256:12120425f07de11a1b899e418d4b0ea174c8d4d572d45bdb640f93bc7ca06a3d                                             12 minutes ago      Exited              registry                   0                   7c1d83eaee1b2       registry-6fb4cdfc84-2xddm
	e3033bf808770       marcnuri/yakd@sha256:c5414196116a2266ad097b0468833b73ef1d6c7922241115fe203fb826381624                                        12 minutes ago      Running             yakd                       0                   0842fc5076de5       yakd-dashboard-67d98fc6b-fntc9
	032beb891c95e       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4211a1de532376c881851542238121b26792225faa36a7b02dccad88fd05797c             12 minutes ago      Running             minikube-ingress-dns       0                   a5d99750711a2       kube-ingress-dns-minikube
	46c9aa4000fac       ba04bb24b9575                                                                                                                12 minutes ago      Running             storage-provisioner        0                   fb78aa5d189d9       storage-provisioner
	e0b66861880f7       2437cf7621777                                                                                                                12 minutes ago      Running             coredns                    0                   d1e683408d6f9       coredns-6f6b679f8f-rvdk4
	12c6b62da3a0f       71d55d66fd4ee                                                                                                                12 minutes ago      Running             kube-proxy                 0                   f9126e8eee8f9       kube-proxy-nhfsw
	d2989d442fb2b       fbbbd428abb4d                                                                                                                12 minutes ago      Running             kube-scheduler             0                   0bde560671b51       kube-scheduler-addons-048000
	a3376fc3007c3       27e3830e14027                                                                                                                12 minutes ago      Running             etcd                       0                   d45b986aeff0b       etcd-addons-048000
	1fdf130a238ea       fcb0683e6bdbd                                                                                                                12 minutes ago      Running             kube-controller-manager    0                   04bb25cbec2d9       kube-controller-manager-addons-048000
	c41defbf78378       cd0f0ae0ec9e0                                                                                                                12 minutes ago      Running             kube-apiserver             0                   4766f8436e2ec       kube-apiserver-addons-048000
	
	
	==> controller_ingress [441c7c7d5ff8] <==
	I0829 18:07:10.237032       7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"tcp-services", UID:"6dc4abb1-2489-4cfd-9409-4f7ceadb82ad", APIVersion:"v1", ResourceVersion:"634", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/tcp-services
	I0829 18:07:10.237065       7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"udp-services", UID:"c9f92bc3-ac3e-43c1-9596-18c79e8af292", APIVersion:"v1", ResourceVersion:"635", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/udp-services
	I0829 18:07:11.434834       7 nginx.go:317] "Starting NGINX process"
	I0829 18:07:11.435191       7 leaderelection.go:250] attempting to acquire leader lease ingress-nginx/ingress-nginx-leader...
	I0829 18:07:11.435913       7 nginx.go:337] "Starting validation webhook" address=":8443" certPath="/usr/local/certificates/cert" keyPath="/usr/local/certificates/key"
	I0829 18:07:11.437238       7 controller.go:193] "Configuration changes detected, backend reload required"
	I0829 18:07:11.449752       7 leaderelection.go:260] successfully acquired lease ingress-nginx/ingress-nginx-leader
	I0829 18:07:11.450398       7 status.go:85] "New leader elected" identity="ingress-nginx-controller-bc57996ff-vv9dw"
	I0829 18:07:11.464798       7 controller.go:213] "Backend successfully reloaded"
	I0829 18:07:11.464910       7 controller.go:224] "Initial sync, sleeping for 1 second"
	I0829 18:07:11.465150       7 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-bc57996ff-vv9dw", UID:"43de9018-8fe4-4332-83cf-e5a3ba7a9c39", APIVersion:"v1", ResourceVersion:"662", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	I0829 18:07:11.472653       7 status.go:219] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-bc57996ff-vv9dw" node="addons-048000"
	W0829 18:18:16.229365       7 controller.go:1110] Error obtaining Endpoints for Service "default/nginx": no object matching key "default/nginx" in local store
	I0829 18:18:16.238675       7 admission.go:149] processed ingress via admission controller {testedIngressLength:1 testedIngressTime:0.009s renderingIngressLength:1 renderingIngressTime:0s admissionTime:0.009s testedConfigurationSize:18.1kB}
	I0829 18:18:16.238698       7 main.go:107] "successfully validated configuration, accepting" ingress="default/nginx-ingress"
	I0829 18:18:16.241523       7 store.go:440] "Found valid IngressClass" ingress="default/nginx-ingress" ingressclass="nginx"
	W0829 18:18:16.241778       7 controller.go:1110] Error obtaining Endpoints for Service "default/nginx": no object matching key "default/nginx" in local store
	I0829 18:18:16.241844       7 controller.go:193] "Configuration changes detected, backend reload required"
	I0829 18:18:16.241905       7 event.go:377] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"nginx-ingress", UID:"7c0fd94a-f883-49a8-a31b-85923d17afb9", APIVersion:"networking.k8s.io/v1", ResourceVersion:"2698", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
	I0829 18:18:16.263855       7 controller.go:213] "Backend successfully reloaded"
	I0829 18:18:16.264199       7 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-bc57996ff-vv9dw", UID:"43de9018-8fe4-4332-83cf-e5a3ba7a9c39", APIVersion:"v1", ResourceVersion:"662", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	W0829 18:18:19.578284       7 controller.go:1216] Service "default/nginx" does not have any active Endpoint.
	I0829 18:18:19.578365       7 controller.go:193] "Configuration changes detected, backend reload required"
	I0829 18:18:19.601343       7 controller.go:213] "Backend successfully reloaded"
	I0829 18:18:19.601719       7 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-bc57996ff-vv9dw", UID:"43de9018-8fe4-4332-83cf-e5a3ba7a9c39", APIVersion:"v1", ResourceVersion:"662", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	
	
	==> coredns [e0b66861880f] <==
	[INFO] 127.0.0.1:54910 - 26817 "HINFO IN 3112745777569883698.9048178716321599015. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.010181949s
	[INFO] 10.244.0.8:60509 - 13641 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000090375s
	[INFO] 10.244.0.8:60509 - 32587 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000097792s
	[INFO] 10.244.0.8:45693 - 46145 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000030458s
	[INFO] 10.244.0.8:45693 - 34654 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000077958s
	[INFO] 10.244.0.8:45959 - 506 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000039792s
	[INFO] 10.244.0.8:45959 - 53499 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000021333s
	[INFO] 10.244.0.8:60441 - 38122 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.0000645s
	[INFO] 10.244.0.8:60441 - 28138 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000026875s
	[INFO] 10.244.0.8:51405 - 32822 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000038958s
	[INFO] 10.244.0.8:51405 - 16946 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000059208s
	[INFO] 10.244.0.8:49711 - 29239 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000012584s
	[INFO] 10.244.0.8:49711 - 3888 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000056125s
	[INFO] 10.244.0.8:58716 - 372 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000011959s
	[INFO] 10.244.0.8:58716 - 12405 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000028167s
	[INFO] 10.244.0.8:52852 - 37161 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000012083s
	[INFO] 10.244.0.8:52852 - 20264 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000012875s
	[INFO] 10.244.0.25:46950 - 7143 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000149457s
	[INFO] 10.244.0.25:40020 - 1467 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000062167s
	[INFO] 10.244.0.25:59021 - 14998 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000035292s
	[INFO] 10.244.0.25:46554 - 46270 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000108041s
	[INFO] 10.244.0.25:38157 - 55658 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000034292s
	[INFO] 10.244.0.25:34685 - 30617 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000037958s
	[INFO] 10.244.0.25:58790 - 31671 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001106034s
	[INFO] 10.244.0.25:52597 - 31249 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 572 0.001175076s
	
	
	==> describe nodes <==
	Name:               addons-048000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-048000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=95341f0b655cea8be5ebfc6bf112c8367dc08d33
	                    minikube.k8s.io/name=addons-048000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_29T11_05_48_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-048000
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 29 Aug 2024 18:05:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-048000
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 29 Aug 2024 18:18:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 29 Aug 2024 18:14:29 +0000   Thu, 29 Aug 2024 18:05:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 29 Aug 2024 18:14:29 +0000   Thu, 29 Aug 2024 18:05:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 29 Aug 2024 18:14:29 +0000   Thu, 29 Aug 2024 18:05:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 29 Aug 2024 18:14:29 +0000   Thu, 29 Aug 2024 18:05:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.2
	  Hostname:    addons-048000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904744Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904744Ki
	  pods:               110
	System Info:
	  Machine ID:                 7a1f210fee9d465396233089da4e8e25
	  System UUID:                7a1f210fee9d465396233089da4e8e25
	  Boot ID:                    ae226006-53a9-4024-9ef8-fa55bcabb191
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://27.1.2
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (17 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m13s
	  default                     cloud-spanner-emulator-769b77f747-2gqs5     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         7s
	  default                     registry-test                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         60s
	  gcp-auth                    gcp-auth-89d5ffd79-6tgp6                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  ingress-nginx               ingress-nginx-controller-bc57996ff-vv9dw    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         12m
	  kube-system                 coredns-6f6b679f8f-rvdk4                    100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     12m
	  kube-system                 etcd-addons-048000                          100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         12m
	  kube-system                 kube-apiserver-addons-048000                250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-addons-048000       200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-nhfsw                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-addons-048000                100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 nvidia-device-plugin-daemonset-t7r5k        0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  local-path-storage          local-path-provisioner-86d989889c-2ckzc     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  yakd-dashboard              yakd-dashboard-67d98fc6b-fntc9              0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             388Mi (10%)  426Mi (11%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 12m   kube-proxy       
	  Normal  Starting                 12m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  12m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  12m   kubelet          Node addons-048000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m   kubelet          Node addons-048000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m   kubelet          Node addons-048000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                12m   kubelet          Node addons-048000 status is now: NodeReady
	  Normal  RegisteredNode           12m   node-controller  Node addons-048000 event: Registered Node addons-048000 in Controller
	
	
	==> dmesg <==
	[  +4.952197] kauditd_printk_skb: 288 callbacks suppressed
	[Aug29 18:06] kauditd_printk_skb: 54 callbacks suppressed
	[  +6.746149] kauditd_printk_skb: 4 callbacks suppressed
	[ +18.948093] kauditd_printk_skb: 4 callbacks suppressed
	[  +7.087389] kauditd_printk_skb: 5 callbacks suppressed
	[  +5.644138] kauditd_printk_skb: 14 callbacks suppressed
	[  +5.354242] kauditd_printk_skb: 22 callbacks suppressed
	[  +6.548380] kauditd_printk_skb: 2 callbacks suppressed
	[Aug29 18:07] kauditd_printk_skb: 44 callbacks suppressed
	[ +16.355138] kauditd_printk_skb: 16 callbacks suppressed
	[ +29.433263] kauditd_printk_skb: 16 callbacks suppressed
	[Aug29 18:08] kauditd_printk_skb: 2 callbacks suppressed
	[ +22.797824] kauditd_printk_skb: 46 callbacks suppressed
	[ +21.975897] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.577549] kauditd_printk_skb: 2 callbacks suppressed
	[Aug29 18:09] kauditd_printk_skb: 20 callbacks suppressed
	[ +20.016619] kauditd_printk_skb: 2 callbacks suppressed
	[Aug29 18:12] kauditd_printk_skb: 2 callbacks suppressed
	[Aug29 18:17] kauditd_printk_skb: 2 callbacks suppressed
	[  +8.896589] kauditd_printk_skb: 19 callbacks suppressed
	[ +25.465294] kauditd_printk_skb: 7 callbacks suppressed
	[Aug29 18:18] kauditd_printk_skb: 45 callbacks suppressed
	[  +5.292785] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.251438] kauditd_printk_skb: 4 callbacks suppressed
	[ +12.341836] kauditd_printk_skb: 13 callbacks suppressed
	
	
	==> etcd [a3376fc3007c] <==
	{"level":"info","ts":"2024-08-29T18:05:44.618620Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-29T18:05:45.605074Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 is starting a new election at term 1"}
	{"level":"info","ts":"2024-08-29T18:05:45.605185Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-08-29T18:05:45.605231Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 received MsgPreVoteResp from c46d288d2fcb0590 at term 1"}
	{"level":"info","ts":"2024-08-29T18:05:45.605255Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 became candidate at term 2"}
	{"level":"info","ts":"2024-08-29T18:05:45.605264Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 received MsgVoteResp from c46d288d2fcb0590 at term 2"}
	{"level":"info","ts":"2024-08-29T18:05:45.605310Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c46d288d2fcb0590 became leader at term 2"}
	{"level":"info","ts":"2024-08-29T18:05:45.605346Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: c46d288d2fcb0590 elected leader c46d288d2fcb0590 at term 2"}
	{"level":"info","ts":"2024-08-29T18:05:45.606640Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"c46d288d2fcb0590","local-member-attributes":"{Name:addons-048000 ClientURLs:[https://192.168.105.2:2379]}","request-path":"/0/members/c46d288d2fcb0590/attributes","cluster-id":"6e03e7863b4f9c54","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-29T18:05:45.606665Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-29T18:05:45.607078Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-29T18:05:45.607099Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-29T18:05:45.606707Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-29T18:05:45.607594Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6e03e7863b4f9c54","local-member-id":"c46d288d2fcb0590","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-29T18:05:45.607771Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-29T18:05:45.606720Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-29T18:05:45.608118Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-29T18:05:45.608775Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-29T18:05:45.608776Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-29T18:05:45.610421Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-29T18:05:45.611170Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.2:2379"}
	{"level":"info","ts":"2024-08-29T18:08:54.102689Z","caller":"traceutil/trace.go:171","msg":"trace[749155615] transaction","detail":"{read_only:false; response_revision:1509; number_of_response:1; }","duration":"109.970251ms","start":"2024-08-29T18:08:53.990229Z","end":"2024-08-29T18:08:54.100200Z","steps":["trace[749155615] 'process raft request'  (duration: 109.870294ms)"],"step_count":1}
	{"level":"info","ts":"2024-08-29T18:15:45.174661Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1828}
	{"level":"info","ts":"2024-08-29T18:15:45.269614Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1828,"took":"93.675051ms","hash":2936028612,"current-db-size-bytes":8790016,"current-db-size":"8.8 MB","current-db-size-in-use-bytes":4710400,"current-db-size-in-use":"4.7 MB"}
	{"level":"info","ts":"2024-08-29T18:15:45.269706Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2936028612,"revision":1828,"compact-revision":-1}
	
	
	==> gcp-auth [1f5d77a9498f] <==
	2024/08/29 18:08:31 GCP Auth Webhook started!
	2024/08/29 18:08:47 Ready to marshal response ...
	2024/08/29 18:08:47 Ready to write response ...
	2024/08/29 18:08:48 Ready to marshal response ...
	2024/08/29 18:08:48 Ready to write response ...
	2024/08/29 18:09:10 Ready to marshal response ...
	2024/08/29 18:09:10 Ready to write response ...
	2024/08/29 18:09:10 Ready to marshal response ...
	2024/08/29 18:09:10 Ready to write response ...
	2024/08/29 18:09:11 Ready to marshal response ...
	2024/08/29 18:09:11 Ready to write response ...
	2024/08/29 18:17:18 Ready to marshal response ...
	2024/08/29 18:17:18 Ready to write response ...
	2024/08/29 18:17:23 Ready to marshal response ...
	2024/08/29 18:17:23 Ready to write response ...
	2024/08/29 18:17:44 Ready to marshal response ...
	2024/08/29 18:17:44 Ready to write response ...
	2024/08/29 18:18:16 Ready to marshal response ...
	2024/08/29 18:18:16 Ready to write response ...
	
	
	==> kernel <==
	 18:18:24 up 12 min,  0 users,  load average: 0.46, 0.60, 0.46
	Linux addons-048000 5.10.207 #1 SMP PREEMPT Tue Aug 27 17:57:16 UTC 2024 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [c41defbf7837] <==
	I0829 18:09:01.567316       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	I0829 18:09:01.716835       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	W0829 18:09:02.288866       1 cacher.go:171] Terminating all watchers from cacher commands.bus.volcano.sh
	W0829 18:09:02.409952       1 cacher.go:171] Terminating all watchers from cacher jobs.batch.volcano.sh
	W0829 18:09:02.476069       1 cacher.go:171] Terminating all watchers from cacher podgroups.scheduling.volcano.sh
	W0829 18:09:02.613255       1 cacher.go:171] Terminating all watchers from cacher queues.scheduling.volcano.sh
	W0829 18:09:02.613323       1 cacher.go:171] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
	W0829 18:09:02.717833       1 cacher.go:171] Terminating all watchers from cacher jobflows.flow.volcano.sh
	W0829 18:09:02.761984       1 cacher.go:171] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	I0829 18:17:26.465903       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0829 18:18:00.322487       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0829 18:18:00.322504       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0829 18:18:00.332993       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0829 18:18:00.336762       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0829 18:18:00.350608       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0829 18:18:00.350627       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0829 18:18:00.366256       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0829 18:18:00.366281       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0829 18:18:01.339618       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0829 18:18:01.370234       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0829 18:18:01.371720       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0829 18:18:10.911255       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0829 18:18:12.021987       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0829 18:18:16.239203       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0829 18:18:16.339860       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.111.247.66"}
	
	
	==> kube-controller-manager [1fdf130a238e] <==
	W0829 18:18:09.477443       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0829 18:18:09.477561       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0829 18:18:10.081781       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0829 18:18:10.082132       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0829 18:18:10.325177       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0829 18:18:10.325274       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	E0829 18:18:12.022796       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0829 18:18:13.541905       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0829 18:18:13.542033       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0829 18:18:16.446223       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0829 18:18:16.446251       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0829 18:18:18.191794       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0829 18:18:18.191946       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0829 18:18:20.995645       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="gadget"
	W0829 18:18:21.802371       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0829 18:18:21.802421       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0829 18:18:22.372366       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0829 18:18:22.372603       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0829 18:18:22.742569       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0829 18:18:22.742614       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0829 18:18:22.856094       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0829 18:18:22.856809       1 shared_informer.go:320] Caches are synced for resource quota
	I0829 18:18:23.428362       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0829 18:18:23.428379       1 shared_informer.go:320] Caches are synced for garbage collector
	I0829 18:18:23.490673       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-6fb4cdfc84" duration="1.5µs"
	
	
	==> kube-proxy [12c6b62da3a0] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0829 18:05:54.184747       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0829 18:05:54.198937       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.105.2"]
	E0829 18:05:54.198971       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0829 18:05:54.223988       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0829 18:05:54.224008       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0829 18:05:54.224024       1 server_linux.go:169] "Using iptables Proxier"
	I0829 18:05:54.228298       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0829 18:05:54.228401       1 server.go:483] "Version info" version="v1.31.0"
	I0829 18:05:54.228406       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0829 18:05:54.228928       1 config.go:197] "Starting service config controller"
	I0829 18:05:54.228951       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0829 18:05:54.228965       1 config.go:104] "Starting endpoint slice config controller"
	I0829 18:05:54.228969       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0829 18:05:54.229273       1 config.go:326] "Starting node config controller"
	I0829 18:05:54.229275       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0829 18:05:54.329592       1 shared_informer.go:320] Caches are synced for node config
	I0829 18:05:54.329630       1 shared_informer.go:320] Caches are synced for service config
	I0829 18:05:54.329643       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [d2989d442fb2] <==
	W0829 18:05:46.177142       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0829 18:05:46.177147       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0829 18:05:46.177168       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0829 18:05:46.177176       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0829 18:05:46.177193       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0829 18:05:46.177198       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0829 18:05:46.177222       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0829 18:05:46.177229       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0829 18:05:46.177246       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0829 18:05:46.177251       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0829 18:05:46.987667       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0829 18:05:46.987722       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0829 18:05:47.030165       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0829 18:05:47.030282       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0829 18:05:47.096579       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0829 18:05:47.096641       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0829 18:05:47.119600       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0829 18:05:47.119893       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0829 18:05:47.174634       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0829 18:05:47.174729       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0829 18:05:47.183893       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0829 18:05:47.183918       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0829 18:05:47.225819       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0829 18:05:47.225842       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0829 18:05:47.775117       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 29 18:18:16 addons-048000 kubelet[2052]: I0829 18:18:16.319265    2052 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3ba83d9-53f3-4399-b695-a537ea82efad" containerName="task-pv-container"
	Aug 29 18:18:16 addons-048000 kubelet[2052]: I0829 18:18:16.361017    2052 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/15267d08-9c88-4381-8d32-53fd2f55bf90-gcp-creds\") pod \"nginx\" (UID: \"15267d08-9c88-4381-8d32-53fd2f55bf90\") " pod="default/nginx"
	Aug 29 18:18:16 addons-048000 kubelet[2052]: I0829 18:18:16.361041    2052 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hv42l\" (UniqueName: \"kubernetes.io/projected/15267d08-9c88-4381-8d32-53fd2f55bf90-kube-api-access-hv42l\") pod \"nginx\" (UID: \"15267d08-9c88-4381-8d32-53fd2f55bf90\") " pod="default/nginx"
	Aug 29 18:18:17 addons-048000 kubelet[2052]: E0829 18:18:17.643752    2052 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-test\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox\\\"\"" pod="default/registry-test" podUID="5307fd4b-725e-41cd-8f3d-8049fba6a389"
	Aug 29 18:18:23 addons-048000 kubelet[2052]: I0829 18:18:23.273257    2052 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx" podStartSLOduration=4.489671122 podStartE2EDuration="7.273154341s" podCreationTimestamp="2024-08-29 18:18:16 +0000 UTC" firstStartedPulling="2024-08-29 18:18:16.730717457 +0000 UTC m=+749.145421112" lastFinishedPulling="2024-08-29 18:18:19.514200635 +0000 UTC m=+751.928904331" observedRunningTime="2024-08-29 18:18:19.930884965 +0000 UTC m=+752.345588662" watchObservedRunningTime="2024-08-29 18:18:23.273154341 +0000 UTC m=+755.687858080"
	Aug 29 18:18:23 addons-048000 kubelet[2052]: I0829 18:18:23.431698    2052 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jjnhg\" (UniqueName: \"kubernetes.io/projected/5307fd4b-725e-41cd-8f3d-8049fba6a389-kube-api-access-jjnhg\") pod \"5307fd4b-725e-41cd-8f3d-8049fba6a389\" (UID: \"5307fd4b-725e-41cd-8f3d-8049fba6a389\") "
	Aug 29 18:18:23 addons-048000 kubelet[2052]: I0829 18:18:23.431732    2052 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/5307fd4b-725e-41cd-8f3d-8049fba6a389-gcp-creds\") pod \"5307fd4b-725e-41cd-8f3d-8049fba6a389\" (UID: \"5307fd4b-725e-41cd-8f3d-8049fba6a389\") "
	Aug 29 18:18:23 addons-048000 kubelet[2052]: I0829 18:18:23.431772    2052 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5307fd4b-725e-41cd-8f3d-8049fba6a389-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "5307fd4b-725e-41cd-8f3d-8049fba6a389" (UID: "5307fd4b-725e-41cd-8f3d-8049fba6a389"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Aug 29 18:18:23 addons-048000 kubelet[2052]: I0829 18:18:23.432572    2052 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5307fd4b-725e-41cd-8f3d-8049fba6a389-kube-api-access-jjnhg" (OuterVolumeSpecName: "kube-api-access-jjnhg") pod "5307fd4b-725e-41cd-8f3d-8049fba6a389" (UID: "5307fd4b-725e-41cd-8f3d-8049fba6a389"). InnerVolumeSpecName "kube-api-access-jjnhg". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 29 18:18:23 addons-048000 kubelet[2052]: I0829 18:18:23.531969    2052 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-jjnhg\" (UniqueName: \"kubernetes.io/projected/5307fd4b-725e-41cd-8f3d-8049fba6a389-kube-api-access-jjnhg\") on node \"addons-048000\" DevicePath \"\""
	Aug 29 18:18:23 addons-048000 kubelet[2052]: I0829 18:18:23.532000    2052 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/5307fd4b-725e-41cd-8f3d-8049fba6a389-gcp-creds\") on node \"addons-048000\" DevicePath \"\""
	Aug 29 18:18:23 addons-048000 kubelet[2052]: I0829 18:18:23.732889    2052 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jmxcq\" (UniqueName: \"kubernetes.io/projected/3059ef24-0a76-4ac9-bc80-747fc239f276-kube-api-access-jmxcq\") pod \"3059ef24-0a76-4ac9-bc80-747fc239f276\" (UID: \"3059ef24-0a76-4ac9-bc80-747fc239f276\") "
	Aug 29 18:18:23 addons-048000 kubelet[2052]: I0829 18:18:23.732911    2052 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gvf9c\" (UniqueName: \"kubernetes.io/projected/c5b53102-3848-4204-9379-99d61d77a524-kube-api-access-gvf9c\") pod \"c5b53102-3848-4204-9379-99d61d77a524\" (UID: \"c5b53102-3848-4204-9379-99d61d77a524\") "
	Aug 29 18:18:23 addons-048000 kubelet[2052]: I0829 18:18:23.733963    2052 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5b53102-3848-4204-9379-99d61d77a524-kube-api-access-gvf9c" (OuterVolumeSpecName: "kube-api-access-gvf9c") pod "c5b53102-3848-4204-9379-99d61d77a524" (UID: "c5b53102-3848-4204-9379-99d61d77a524"). InnerVolumeSpecName "kube-api-access-gvf9c". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 29 18:18:23 addons-048000 kubelet[2052]: I0829 18:18:23.733985    2052 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3059ef24-0a76-4ac9-bc80-747fc239f276-kube-api-access-jmxcq" (OuterVolumeSpecName: "kube-api-access-jmxcq") pod "3059ef24-0a76-4ac9-bc80-747fc239f276" (UID: "3059ef24-0a76-4ac9-bc80-747fc239f276"). InnerVolumeSpecName "kube-api-access-jmxcq". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 29 18:18:23 addons-048000 kubelet[2052]: I0829 18:18:23.834017    2052 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-jmxcq\" (UniqueName: \"kubernetes.io/projected/3059ef24-0a76-4ac9-bc80-747fc239f276-kube-api-access-jmxcq\") on node \"addons-048000\" DevicePath \"\""
	Aug 29 18:18:23 addons-048000 kubelet[2052]: I0829 18:18:23.834034    2052 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-gvf9c\" (UniqueName: \"kubernetes.io/projected/c5b53102-3848-4204-9379-99d61d77a524-kube-api-access-gvf9c\") on node \"addons-048000\" DevicePath \"\""
	Aug 29 18:18:23 addons-048000 kubelet[2052]: I0829 18:18:23.979381    2052 scope.go:117] "RemoveContainer" containerID="4410a5c85d11aaa66dab3ce8719bc84b984886aa1a5db38526650bb5f37bbb91"
	Aug 29 18:18:24 addons-048000 kubelet[2052]: I0829 18:18:24.014237    2052 scope.go:117] "RemoveContainer" containerID="4410a5c85d11aaa66dab3ce8719bc84b984886aa1a5db38526650bb5f37bbb91"
	Aug 29 18:18:24 addons-048000 kubelet[2052]: E0829 18:18:24.014886    2052 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 4410a5c85d11aaa66dab3ce8719bc84b984886aa1a5db38526650bb5f37bbb91" containerID="4410a5c85d11aaa66dab3ce8719bc84b984886aa1a5db38526650bb5f37bbb91"
	Aug 29 18:18:24 addons-048000 kubelet[2052]: I0829 18:18:24.014923    2052 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"4410a5c85d11aaa66dab3ce8719bc84b984886aa1a5db38526650bb5f37bbb91"} err="failed to get container status \"4410a5c85d11aaa66dab3ce8719bc84b984886aa1a5db38526650bb5f37bbb91\": rpc error: code = Unknown desc = Error response from daemon: No such container: 4410a5c85d11aaa66dab3ce8719bc84b984886aa1a5db38526650bb5f37bbb91"
	Aug 29 18:18:24 addons-048000 kubelet[2052]: I0829 18:18:24.014936    2052 scope.go:117] "RemoveContainer" containerID="497714a41c5098db7e2fd992481b124d9fcfa1a57ae48885260e7f70cb63aeba"
	Aug 29 18:18:24 addons-048000 kubelet[2052]: I0829 18:18:24.022182    2052 scope.go:117] "RemoveContainer" containerID="497714a41c5098db7e2fd992481b124d9fcfa1a57ae48885260e7f70cb63aeba"
	Aug 29 18:18:24 addons-048000 kubelet[2052]: E0829 18:18:24.022705    2052 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 497714a41c5098db7e2fd992481b124d9fcfa1a57ae48885260e7f70cb63aeba" containerID="497714a41c5098db7e2fd992481b124d9fcfa1a57ae48885260e7f70cb63aeba"
	Aug 29 18:18:24 addons-048000 kubelet[2052]: I0829 18:18:24.022737    2052 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"497714a41c5098db7e2fd992481b124d9fcfa1a57ae48885260e7f70cb63aeba"} err="failed to get container status \"497714a41c5098db7e2fd992481b124d9fcfa1a57ae48885260e7f70cb63aeba\": rpc error: code = Unknown desc = Error response from daemon: No such container: 497714a41c5098db7e2fd992481b124d9fcfa1a57ae48885260e7f70cb63aeba"
	
	
	==> storage-provisioner [46c9aa4000fa] <==
	I0829 18:05:55.321253       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0829 18:05:55.343403       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0829 18:05:55.355614       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0829 18:05:55.363917       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0829 18:05:55.364025       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-048000_c151fe5c-7fa1-4993-b236-8ae5854b1917!
	I0829 18:05:55.364526       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1abe05d6-1897-48b6-830c-17212181446e", APIVersion:"v1", ResourceVersion:"474", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-048000_c151fe5c-7fa1-4993-b236-8ae5854b1917 became leader
	I0829 18:05:55.464513       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-048000_c151fe5c-7fa1-4993-b236-8ae5854b1917!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p addons-048000 -n addons-048000
helpers_test.go:261: (dbg) Run:  kubectl --context addons-048000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox ingress-nginx-admission-create-mlsl6 ingress-nginx-admission-patch-d8mvk
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-048000 describe pod busybox ingress-nginx-admission-create-mlsl6 ingress-nginx-admission-patch-d8mvk
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-048000 describe pod busybox ingress-nginx-admission-create-mlsl6 ingress-nginx-admission-patch-d8mvk: exit status 1 (41.844584ms)

-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-048000/192.168.105.2
	Start Time:       Thu, 29 Aug 2024 11:09:10 -0700
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.27
	IPs:
	  IP:  10.244.0.27
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-f454t (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-f454t:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  9m14s                  default-scheduler  Successfully assigned default/busybox to addons-048000
	  Normal   Pulling    7m50s (x4 over 9m13s)  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     7m49s (x4 over 9m13s)  kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": Error response from daemon: Head "https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc": unauthorized: authentication failed
	  Warning  Failed     7m49s (x4 over 9m13s)  kubelet            Error: ErrImagePull
	  Warning  Failed     7m24s (x6 over 9m12s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m9s (x20 over 9m12s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-mlsl6" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-d8mvk" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-048000 describe pod busybox ingress-nginx-admission-create-mlsl6 ingress-nginx-admission-patch-d8mvk: exit status 1
--- FAIL: TestAddons/parallel/Registry (72.30s)

TestCertOptions (10.12s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-272000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-272000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (9.850616209s)

                                                
                                                
-- stdout --
	* [cert-options-272000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19531
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19531-965/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19531-965/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-options-272000" primary control-plane node in "cert-options-272000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-272000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-272000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-272000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-272000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-272000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 83 (78.964208ms)

                                                
                                                
-- stdout --
	* The control-plane node cert-options-272000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-272000"

                                                
                                                
-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-272000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 83
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-272000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-272000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-272000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 83 (40.141833ms)

                                                
                                                
-- stdout --
	* The control-plane node cert-options-272000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-272000"

                                                
                                                
-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-272000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 83
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contain the right API port. 
-- stdout --
	* The control-plane node cert-options-272000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-272000"

                                                
                                                
-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-08-29 12:04:28.644386 -0700 PDT m=+3593.601817376
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-272000 -n cert-options-272000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-272000 -n cert-options-272000: exit status 7 (29.744417ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-272000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-272000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-272000
--- FAIL: TestCertOptions (10.12s)
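
The start failure here (and in most qemu2 tests in this run) is not certificate-related: both VM creation attempts die at `Failed to connect to "/var/run/socket_vmnet": Connection refused`, so no host ever boots and every later assertion runs against a stopped profile. That error means nothing is listening on the socket_vmnet socket on the agent. A minimal host-side check, assuming socket_vmnet serves the default /var/run/socket_vmnet path shown in the log (the launchd label depends on how it was installed):

	# Does the socket exist, and is anything accepting connections on it?
	ls -l /var/run/socket_vmnet
	nc -U /var/run/socket_vmnet < /dev/null   # "Connection refused" reproduces the failure

	# If socket_vmnet runs as a launchd daemon, confirm the job is loaded.
	sudo launchctl list | grep -i socket_vmnet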

                                                
                                    
TestCertExpiration (195.63s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-916000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-916000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (10.232617208s)

                                                
                                                
-- stdout --
	* [cert-expiration-916000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19531
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19531-965/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19531-965/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-expiration-916000" primary control-plane node in "cert-expiration-916000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-916000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-916000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-916000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-916000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-916000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (5.246618708s)

                                                
                                                
-- stdout --
	* [cert-expiration-916000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19531
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19531-965/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19531-965/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-916000" primary control-plane node in "cert-expiration-916000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-916000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-916000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-916000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-916000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-916000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19531
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19531-965/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19531-965/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-916000" primary control-plane node in "cert-expiration-916000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-916000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-916000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-916000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-08-29 12:07:18.464463 -0700 PDT m=+3763.449203626
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-916000 -n cert-expiration-916000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-916000 -n cert-expiration-916000: exit status 7 (67.422541ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-916000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-916000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-916000
--- FAIL: TestCertExpiration (195.63s)
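
This test never reaches its real assertion: it is meant to start a cluster with --cert-expiration=3m, wait out the three minutes, restart with --cert-expiration=8760h, and expect the second start to warn about expired certificates. With socket_vmnet down, both starts fail before any certificate is issued. On a run where the VM actually boots, the certificate window can be inspected directly; a sketch using the cert path quoted earlier in this report:

	# Print the apiserver certificate's validity window inside the VM.
	out/minikube-darwin-arm64 -p cert-expiration-916000 ssh -- \
	  "sudo openssl x509 -noout -startdate -enddate -in /var/lib/minikube/certs/apiserver.crt"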

                                                
                                    
TestDockerFlags (10.18s)

                                                
                                                
=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

                                                
                                                

                                                
                                                
=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-830000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-830000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.94556525s)

                                                
                                                
-- stdout --
	* [docker-flags-830000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19531
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19531-965/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19531-965/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "docker-flags-830000" primary control-plane node in "docker-flags-830000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-830000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0829 12:04:08.484746    4866 out.go:345] Setting OutFile to fd 1 ...
	I0829 12:04:08.484885    4866 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 12:04:08.484889    4866 out.go:358] Setting ErrFile to fd 2...
	I0829 12:04:08.484891    4866 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 12:04:08.485017    4866 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19531-965/.minikube/bin
	I0829 12:04:08.486094    4866 out.go:352] Setting JSON to false
	I0829 12:04:08.501993    4866 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3812,"bootTime":1724954436,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0829 12:04:08.502055    4866 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0829 12:04:08.505697    4866 out.go:177] * [docker-flags-830000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0829 12:04:08.512673    4866 out.go:177]   - MINIKUBE_LOCATION=19531
	I0829 12:04:08.512749    4866 notify.go:220] Checking for updates...
	I0829 12:04:08.519685    4866 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19531-965/kubeconfig
	I0829 12:04:08.522781    4866 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0829 12:04:08.525633    4866 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0829 12:04:08.528642    4866 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19531-965/.minikube
	I0829 12:04:08.531697    4866 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0829 12:04:08.533428    4866 config.go:182] Loaded profile config "cert-expiration-916000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0829 12:04:08.533510    4866 config.go:182] Loaded profile config "multinode-531000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0829 12:04:08.533566    4866 driver.go:392] Setting default libvirt URI to qemu:///system
	I0829 12:04:08.537682    4866 out.go:177] * Using the qemu2 driver based on user configuration
	I0829 12:04:08.544506    4866 start.go:297] selected driver: qemu2
	I0829 12:04:08.544515    4866 start.go:901] validating driver "qemu2" against <nil>
	I0829 12:04:08.544539    4866 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0829 12:04:08.546937    4866 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0829 12:04:08.549735    4866 out.go:177] * Automatically selected the socket_vmnet network
	I0829 12:04:08.552765    4866 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0829 12:04:08.552795    4866 cni.go:84] Creating CNI manager for ""
	I0829 12:04:08.552806    4866 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0829 12:04:08.552812    4866 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0829 12:04:08.552855    4866 start.go:340] cluster config:
	{Name:docker-flags-830000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:docker-flags-830000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 12:04:08.556750    4866 iso.go:125] acquiring lock: {Name:mke1867919ff797f263eb38fb348ce00bba9d753 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 12:04:08.564682    4866 out.go:177] * Starting "docker-flags-830000" primary control-plane node in "docker-flags-830000" cluster
	I0829 12:04:08.568627    4866 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0829 12:04:08.568642    4866 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19531-965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0829 12:04:08.568652    4866 cache.go:56] Caching tarball of preloaded images
	I0829 12:04:08.568715    4866 preload.go:172] Found /Users/jenkins/minikube-integration/19531-965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0829 12:04:08.568720    4866 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0829 12:04:08.568782    4866 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/docker-flags-830000/config.json ...
	I0829 12:04:08.568793    4866 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/docker-flags-830000/config.json: {Name:mka890bd510e433bb7d067b13a6fa4e09caa7e35 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 12:04:08.569004    4866 start.go:360] acquireMachinesLock for docker-flags-830000: {Name:mk54c878341df5948b162b01843176e59d4f0973 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0829 12:04:08.569038    4866 start.go:364] duration metric: took 27.75µs to acquireMachinesLock for "docker-flags-830000"
	I0829 12:04:08.569049    4866 start.go:93] Provisioning new machine with config: &{Name:docker-flags-830000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:docker-flags-830000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0829 12:04:08.569078    4866 start.go:125] createHost starting for "" (driver="qemu2")
	I0829 12:04:08.576667    4866 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0829 12:04:08.595253    4866 start.go:159] libmachine.API.Create for "docker-flags-830000" (driver="qemu2")
	I0829 12:04:08.595284    4866 client.go:168] LocalClient.Create starting
	I0829 12:04:08.595364    4866 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19531-965/.minikube/certs/ca.pem
	I0829 12:04:08.595394    4866 main.go:141] libmachine: Decoding PEM data...
	I0829 12:04:08.595405    4866 main.go:141] libmachine: Parsing certificate...
	I0829 12:04:08.595445    4866 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19531-965/.minikube/certs/cert.pem
	I0829 12:04:08.595469    4866 main.go:141] libmachine: Decoding PEM data...
	I0829 12:04:08.595475    4866 main.go:141] libmachine: Parsing certificate...
	I0829 12:04:08.595831    4866 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19531-965/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19531-965/.minikube/cache/iso/arm64/minikube-v1.33.1-1724775098-19521-arm64.iso...
	I0829 12:04:08.757525    4866 main.go:141] libmachine: Creating SSH key...
	I0829 12:04:08.811819    4866 main.go:141] libmachine: Creating Disk image...
	I0829 12:04:08.811824    4866 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0829 12:04:08.811982    4866 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19531-965/.minikube/machines/docker-flags-830000/disk.qcow2.raw /Users/jenkins/minikube-integration/19531-965/.minikube/machines/docker-flags-830000/disk.qcow2
	I0829 12:04:08.821369    4866 main.go:141] libmachine: STDOUT: 
	I0829 12:04:08.821385    4866 main.go:141] libmachine: STDERR: 
	I0829 12:04:08.821443    4866 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19531-965/.minikube/machines/docker-flags-830000/disk.qcow2 +20000M
	I0829 12:04:08.829420    4866 main.go:141] libmachine: STDOUT: Image resized.
	
	I0829 12:04:08.829439    4866 main.go:141] libmachine: STDERR: 
	I0829 12:04:08.829452    4866 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19531-965/.minikube/machines/docker-flags-830000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19531-965/.minikube/machines/docker-flags-830000/disk.qcow2
	I0829 12:04:08.829458    4866 main.go:141] libmachine: Starting QEMU VM...
	I0829 12:04:08.829467    4866 qemu.go:418] Using hvf for hardware acceleration
	I0829 12:04:08.829496    4866 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19531-965/.minikube/machines/docker-flags-830000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19531-965/.minikube/machines/docker-flags-830000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19531-965/.minikube/machines/docker-flags-830000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:62:1f:7c:dc:60 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19531-965/.minikube/machines/docker-flags-830000/disk.qcow2
	I0829 12:04:08.831137    4866 main.go:141] libmachine: STDOUT: 
	I0829 12:04:08.831151    4866 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0829 12:04:08.831168    4866 client.go:171] duration metric: took 235.881334ms to LocalClient.Create
	I0829 12:04:10.833314    4866 start.go:128] duration metric: took 2.264245667s to createHost
	I0829 12:04:10.833373    4866 start.go:83] releasing machines lock for "docker-flags-830000", held for 2.264358875s
	W0829 12:04:10.833442    4866 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0829 12:04:10.859497    4866 out.go:177] * Deleting "docker-flags-830000" in qemu2 ...
	W0829 12:04:10.886994    4866 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0829 12:04:10.887018    4866 start.go:729] Will try again in 5 seconds ...
	I0829 12:04:15.889202    4866 start.go:360] acquireMachinesLock for docker-flags-830000: {Name:mk54c878341df5948b162b01843176e59d4f0973 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0829 12:04:15.889738    4866 start.go:364] duration metric: took 422.292µs to acquireMachinesLock for "docker-flags-830000"
	I0829 12:04:15.889880    4866 start.go:93] Provisioning new machine with config: &{Name:docker-flags-830000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:docker-flags-830000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0829 12:04:15.890163    4866 start.go:125] createHost starting for "" (driver="qemu2")
	I0829 12:04:15.899812    4866 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0829 12:04:15.953199    4866 start.go:159] libmachine.API.Create for "docker-flags-830000" (driver="qemu2")
	I0829 12:04:15.953244    4866 client.go:168] LocalClient.Create starting
	I0829 12:04:15.953342    4866 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19531-965/.minikube/certs/ca.pem
	I0829 12:04:15.953394    4866 main.go:141] libmachine: Decoding PEM data...
	I0829 12:04:15.953410    4866 main.go:141] libmachine: Parsing certificate...
	I0829 12:04:15.953476    4866 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19531-965/.minikube/certs/cert.pem
	I0829 12:04:15.953508    4866 main.go:141] libmachine: Decoding PEM data...
	I0829 12:04:15.953522    4866 main.go:141] libmachine: Parsing certificate...
	I0829 12:04:15.954011    4866 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19531-965/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19531-965/.minikube/cache/iso/arm64/minikube-v1.33.1-1724775098-19521-arm64.iso...
	I0829 12:04:16.133872    4866 main.go:141] libmachine: Creating SSH key...
	I0829 12:04:16.330520    4866 main.go:141] libmachine: Creating Disk image...
	I0829 12:04:16.330527    4866 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0829 12:04:16.330718    4866 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19531-965/.minikube/machines/docker-flags-830000/disk.qcow2.raw /Users/jenkins/minikube-integration/19531-965/.minikube/machines/docker-flags-830000/disk.qcow2
	I0829 12:04:16.340275    4866 main.go:141] libmachine: STDOUT: 
	I0829 12:04:16.340304    4866 main.go:141] libmachine: STDERR: 
	I0829 12:04:16.340350    4866 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19531-965/.minikube/machines/docker-flags-830000/disk.qcow2 +20000M
	I0829 12:04:16.348416    4866 main.go:141] libmachine: STDOUT: Image resized.
	
	I0829 12:04:16.348433    4866 main.go:141] libmachine: STDERR: 
	I0829 12:04:16.348442    4866 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19531-965/.minikube/machines/docker-flags-830000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19531-965/.minikube/machines/docker-flags-830000/disk.qcow2
	I0829 12:04:16.348446    4866 main.go:141] libmachine: Starting QEMU VM...
	I0829 12:04:16.348458    4866 qemu.go:418] Using hvf for hardware acceleration
	I0829 12:04:16.348494    4866 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19531-965/.minikube/machines/docker-flags-830000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19531-965/.minikube/machines/docker-flags-830000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19531-965/.minikube/machines/docker-flags-830000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:d5:11:ad:cc:27 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19531-965/.minikube/machines/docker-flags-830000/disk.qcow2
	I0829 12:04:16.350097    4866 main.go:141] libmachine: STDOUT: 
	I0829 12:04:16.350111    4866 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0829 12:04:16.350123    4866 client.go:171] duration metric: took 396.879833ms to LocalClient.Create
	I0829 12:04:18.352312    4866 start.go:128] duration metric: took 2.462144083s to createHost
	I0829 12:04:18.352397    4866 start.go:83] releasing machines lock for "docker-flags-830000", held for 2.462669708s
	W0829 12:04:18.352808    4866 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-830000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p docker-flags-830000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0829 12:04:18.367404    4866 out.go:201] 
	W0829 12:04:18.370573    4866 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0829 12:04:18.370605    4866 out.go:270] * 
	* 
	W0829 12:04:18.373153    4866 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0829 12:04:18.388515    4866 out.go:201] 

                                                
                                                
** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-830000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-830000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-830000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 83 (79.980083ms)

                                                
                                                
-- stdout --
	* The control-plane node docker-flags-830000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-830000"

                                                
                                                
-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-830000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 83
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-830000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-830000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-830000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-830000\"\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-830000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-830000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 83 (45.088583ms)

                                                
                                                
-- stdout --
	* The control-plane node docker-flags-830000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-830000"

                                                
                                                
-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-830000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 83
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-830000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to include *--debug*. output: "* The control-plane node docker-flags-830000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-830000\"\n"
panic.go:626: *** TestDockerFlags FAILED at 2024-08-29 12:04:18.529635 -0700 PDT m=+3583.486920960
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-830000 -n docker-flags-830000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-830000 -n docker-flags-830000: exit status 7 (30.397667ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-830000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-830000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-830000
--- FAIL: TestDockerFlags (10.18s)
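
As with the other qemu2 tests, the two `systemctl show docker` assertions ran against a stopped host, so the env/opt checks could only fail. On a healthy cluster the flags are verifiable with the same commands the test uses; a sketch with the expected values taken from the test's own assertions:

	# --docker-env values should appear in the daemon's Environment.
	out/minikube-darwin-arm64 -p docker-flags-830000 ssh \
	  "sudo systemctl show docker --property=Environment --no-pager"
	# expected to include: FOO=BAR and BAZ=BAT

	# --docker-opt values are appended to the daemon's ExecStart.
	out/minikube-darwin-arm64 -p docker-flags-830000 ssh \
	  "sudo systemctl show docker --property=ExecStart --no-pager"
	# expected to include: --debug and --icc=true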

                                                
                                    
TestForceSystemdFlag (10.17s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-855000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-855000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.96095625s)

                                                
                                                
-- stdout --
	* [force-systemd-flag-855000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19531
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19531-965/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19531-965/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-flag-855000" primary control-plane node in "force-systemd-flag-855000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-855000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0829 12:03:38.998624    4719 out.go:345] Setting OutFile to fd 1 ...
	I0829 12:03:38.998764    4719 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 12:03:38.998767    4719 out.go:358] Setting ErrFile to fd 2...
	I0829 12:03:38.998769    4719 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 12:03:38.998907    4719 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19531-965/.minikube/bin
	I0829 12:03:39.000027    4719 out.go:352] Setting JSON to false
	I0829 12:03:39.016255    4719 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3783,"bootTime":1724954436,"procs":480,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0829 12:03:39.016325    4719 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0829 12:03:39.019882    4719 out.go:177] * [force-systemd-flag-855000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0829 12:03:39.027836    4719 out.go:177]   - MINIKUBE_LOCATION=19531
	I0829 12:03:39.027889    4719 notify.go:220] Checking for updates...
	I0829 12:03:39.035684    4719 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19531-965/kubeconfig
	I0829 12:03:39.039760    4719 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0829 12:03:39.042860    4719 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0829 12:03:39.046718    4719 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19531-965/.minikube
	I0829 12:03:39.049771    4719 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0829 12:03:39.053135    4719 config.go:182] Loaded profile config "NoKubernetes-185000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v0.0.0
	I0829 12:03:39.053203    4719 config.go:182] Loaded profile config "multinode-531000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0829 12:03:39.053247    4719 driver.go:392] Setting default libvirt URI to qemu:///system
	I0829 12:03:39.057740    4719 out.go:177] * Using the qemu2 driver based on user configuration
	I0829 12:03:39.064776    4719 start.go:297] selected driver: qemu2
	I0829 12:03:39.064782    4719 start.go:901] validating driver "qemu2" against <nil>
	I0829 12:03:39.064788    4719 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0829 12:03:39.067204    4719 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0829 12:03:39.069840    4719 out.go:177] * Automatically selected the socket_vmnet network
	I0829 12:03:39.072831    4719 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0829 12:03:39.072846    4719 cni.go:84] Creating CNI manager for ""
	I0829 12:03:39.072853    4719 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0829 12:03:39.072857    4719 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0829 12:03:39.072884    4719 start.go:340] cluster config:
	{Name:force-systemd-flag-855000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:force-systemd-flag-855000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 12:03:39.076588    4719 iso.go:125] acquiring lock: {Name:mke1867919ff797f263eb38fb348ce00bba9d753 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 12:03:39.083792    4719 out.go:177] * Starting "force-systemd-flag-855000" primary control-plane node in "force-systemd-flag-855000" cluster
	I0829 12:03:39.087849    4719 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0829 12:03:39.087867    4719 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19531-965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0829 12:03:39.087878    4719 cache.go:56] Caching tarball of preloaded images
	I0829 12:03:39.087953    4719 preload.go:172] Found /Users/jenkins/minikube-integration/19531-965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0829 12:03:39.087960    4719 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0829 12:03:39.088026    4719 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/force-systemd-flag-855000/config.json ...
	I0829 12:03:39.088044    4719 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/force-systemd-flag-855000/config.json: {Name:mk79df731930b488aa8c0d92b2c8b2def51d8a0d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 12:03:39.088276    4719 start.go:360] acquireMachinesLock for force-systemd-flag-855000: {Name:mk54c878341df5948b162b01843176e59d4f0973 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0829 12:03:39.088313    4719 start.go:364] duration metric: took 30.125µs to acquireMachinesLock for "force-systemd-flag-855000"
	I0829 12:03:39.088325    4719 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-855000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:force-systemd-flag-855000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0829 12:03:39.088354    4719 start.go:125] createHost starting for "" (driver="qemu2")
	I0829 12:03:39.096784    4719 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0829 12:03:39.115450    4719 start.go:159] libmachine.API.Create for "force-systemd-flag-855000" (driver="qemu2")
	I0829 12:03:39.115479    4719 client.go:168] LocalClient.Create starting
	I0829 12:03:39.115545    4719 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19531-965/.minikube/certs/ca.pem
	I0829 12:03:39.115576    4719 main.go:141] libmachine: Decoding PEM data...
	I0829 12:03:39.115586    4719 main.go:141] libmachine: Parsing certificate...
	I0829 12:03:39.115630    4719 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19531-965/.minikube/certs/cert.pem
	I0829 12:03:39.115654    4719 main.go:141] libmachine: Decoding PEM data...
	I0829 12:03:39.115663    4719 main.go:141] libmachine: Parsing certificate...
	I0829 12:03:39.116024    4719 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19531-965/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19531-965/.minikube/cache/iso/arm64/minikube-v1.33.1-1724775098-19521-arm64.iso...
	I0829 12:03:39.299275    4719 main.go:141] libmachine: Creating SSH key...
	I0829 12:03:39.383609    4719 main.go:141] libmachine: Creating Disk image...
	I0829 12:03:39.383614    4719 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0829 12:03:39.383803    4719 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19531-965/.minikube/machines/force-systemd-flag-855000/disk.qcow2.raw /Users/jenkins/minikube-integration/19531-965/.minikube/machines/force-systemd-flag-855000/disk.qcow2
	I0829 12:03:39.393112    4719 main.go:141] libmachine: STDOUT: 
	I0829 12:03:39.393126    4719 main.go:141] libmachine: STDERR: 
	I0829 12:03:39.393175    4719 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19531-965/.minikube/machines/force-systemd-flag-855000/disk.qcow2 +20000M
	I0829 12:03:39.401096    4719 main.go:141] libmachine: STDOUT: Image resized.
	
	I0829 12:03:39.401111    4719 main.go:141] libmachine: STDERR: 
	I0829 12:03:39.401131    4719 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19531-965/.minikube/machines/force-systemd-flag-855000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19531-965/.minikube/machines/force-systemd-flag-855000/disk.qcow2
	I0829 12:03:39.401138    4719 main.go:141] libmachine: Starting QEMU VM...
	I0829 12:03:39.401153    4719 qemu.go:418] Using hvf for hardware acceleration
	I0829 12:03:39.401184    4719 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19531-965/.minikube/machines/force-systemd-flag-855000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19531-965/.minikube/machines/force-systemd-flag-855000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19531-965/.minikube/machines/force-systemd-flag-855000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f2:41:dc:e9:bb:44 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19531-965/.minikube/machines/force-systemd-flag-855000/disk.qcow2
	I0829 12:03:39.402791    4719 main.go:141] libmachine: STDOUT: 
	I0829 12:03:39.402808    4719 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0829 12:03:39.402828    4719 client.go:171] duration metric: took 287.347792ms to LocalClient.Create
	I0829 12:03:41.405006    4719 start.go:128] duration metric: took 2.316663s to createHost
	I0829 12:03:41.405055    4719 start.go:83] releasing machines lock for "force-systemd-flag-855000", held for 2.31676675s
	W0829 12:03:41.405104    4719 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0829 12:03:41.421246    4719 out.go:177] * Deleting "force-systemd-flag-855000" in qemu2 ...
	W0829 12:03:41.453228    4719 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0829 12:03:41.453257    4719 start.go:729] Will try again in 5 seconds ...
	I0829 12:03:46.455411    4719 start.go:360] acquireMachinesLock for force-systemd-flag-855000: {Name:mk54c878341df5948b162b01843176e59d4f0973 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0829 12:03:46.455817    4719 start.go:364] duration metric: took 315.75µs to acquireMachinesLock for "force-systemd-flag-855000"
	I0829 12:03:46.455945    4719 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-855000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:force-systemd-flag-855000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0829 12:03:46.456268    4719 start.go:125] createHost starting for "" (driver="qemu2")
	I0829 12:03:46.460942    4719 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0829 12:03:46.512103    4719 start.go:159] libmachine.API.Create for "force-systemd-flag-855000" (driver="qemu2")
	I0829 12:03:46.512156    4719 client.go:168] LocalClient.Create starting
	I0829 12:03:46.512279    4719 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19531-965/.minikube/certs/ca.pem
	I0829 12:03:46.512360    4719 main.go:141] libmachine: Decoding PEM data...
	I0829 12:03:46.512379    4719 main.go:141] libmachine: Parsing certificate...
	I0829 12:03:46.512444    4719 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19531-965/.minikube/certs/cert.pem
	I0829 12:03:46.512493    4719 main.go:141] libmachine: Decoding PEM data...
	I0829 12:03:46.512509    4719 main.go:141] libmachine: Parsing certificate...
	I0829 12:03:46.513136    4719 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19531-965/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19531-965/.minikube/cache/iso/arm64/minikube-v1.33.1-1724775098-19521-arm64.iso...
	I0829 12:03:46.685358    4719 main.go:141] libmachine: Creating SSH key...
	I0829 12:03:46.852480    4719 main.go:141] libmachine: Creating Disk image...
	I0829 12:03:46.852488    4719 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0829 12:03:46.852692    4719 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19531-965/.minikube/machines/force-systemd-flag-855000/disk.qcow2.raw /Users/jenkins/minikube-integration/19531-965/.minikube/machines/force-systemd-flag-855000/disk.qcow2
	I0829 12:03:46.862554    4719 main.go:141] libmachine: STDOUT: 
	I0829 12:03:46.862580    4719 main.go:141] libmachine: STDERR: 
	I0829 12:03:46.862633    4719 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19531-965/.minikube/machines/force-systemd-flag-855000/disk.qcow2 +20000M
	I0829 12:03:46.870696    4719 main.go:141] libmachine: STDOUT: Image resized.
	
	I0829 12:03:46.870714    4719 main.go:141] libmachine: STDERR: 
	I0829 12:03:46.870729    4719 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19531-965/.minikube/machines/force-systemd-flag-855000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19531-965/.minikube/machines/force-systemd-flag-855000/disk.qcow2
	I0829 12:03:46.870746    4719 main.go:141] libmachine: Starting QEMU VM...
	I0829 12:03:46.870758    4719 qemu.go:418] Using hvf for hardware acceleration
	I0829 12:03:46.870787    4719 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19531-965/.minikube/machines/force-systemd-flag-855000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19531-965/.minikube/machines/force-systemd-flag-855000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19531-965/.minikube/machines/force-systemd-flag-855000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:e5:c6:68:24:32 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19531-965/.minikube/machines/force-systemd-flag-855000/disk.qcow2
	I0829 12:03:46.872387    4719 main.go:141] libmachine: STDOUT: 
	I0829 12:03:46.872408    4719 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0829 12:03:46.872419    4719 client.go:171] duration metric: took 360.261417ms to LocalClient.Create
	I0829 12:03:48.874609    4719 start.go:128] duration metric: took 2.418337s to createHost
	I0829 12:03:48.874746    4719 start.go:83] releasing machines lock for "force-systemd-flag-855000", held for 2.418875625s
	W0829 12:03:48.875084    4719 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-855000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-855000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0829 12:03:48.891825    4719 out.go:201] 
	W0829 12:03:48.895814    4719 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0829 12:03:48.895842    4719 out.go:270] * 
	* 
	W0829 12:03:48.899101    4719 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0829 12:03:48.912888    4719 out.go:201] 

** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-855000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-855000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-855000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (87.209375ms)

-- stdout --
	* The control-plane node force-systemd-flag-855000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-flag-855000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-855000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-08-29 12:03:49.021329 -0700 PDT m=+3553.978190293
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-855000 -n force-systemd-flag-855000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-855000 -n force-systemd-flag-855000: exit status 7 (34.93ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-855000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-855000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-855000
--- FAIL: TestForceSystemdFlag (10.17s)
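Note: every VM-creation attempt in this test dies at the same step: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet, i.e. no socket_vmnet daemon appears to be listening on the host. A minimal pre-flight probe along these lines would surface that before any qemu-system-aarch64 invocation is attempted; this is a hypothetical helper sketched in Go against the standard library, with the socket path taken from the log above, not code from minikube itself.

	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	// checkVMNetSocket reports whether anything is listening on the
	// socket_vmnet unix socket. "Connection refused" from DialTimeout is
	// exactly the failure mode captured in the log above.
	func checkVMNetSocket(path string) error {
		if _, err := os.Stat(path); err != nil {
			return fmt.Errorf("socket file missing: %w", err)
		}
		conn, err := net.DialTimeout("unix", path, 2*time.Second)
		if err != nil {
			return fmt.Errorf("no daemon listening (is socket_vmnet running?): %w", err)
		}
		conn.Close()
		return nil
	}

	func main() {
		if err := checkVMNetSocket("/var/run/socket_vmnet"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("socket_vmnet reachable")
	}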

TestForceSystemdEnv (10.27s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-088000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-088000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.064633458s)

-- stdout --
	* [force-systemd-env-088000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19531
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19531-965/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19531-965/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-env-088000" primary control-plane node in "force-systemd-env-088000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-088000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0829 12:03:58.215430    4820 out.go:345] Setting OutFile to fd 1 ...
	I0829 12:03:58.215559    4820 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 12:03:58.215562    4820 out.go:358] Setting ErrFile to fd 2...
	I0829 12:03:58.215565    4820 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 12:03:58.215692    4820 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19531-965/.minikube/bin
	I0829 12:03:58.216761    4820 out.go:352] Setting JSON to false
	I0829 12:03:58.233509    4820 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3802,"bootTime":1724954436,"procs":480,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0829 12:03:58.233583    4820 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0829 12:03:58.239360    4820 out.go:177] * [force-systemd-env-088000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0829 12:03:58.246169    4820 out.go:177]   - MINIKUBE_LOCATION=19531
	I0829 12:03:58.246188    4820 notify.go:220] Checking for updates...
	I0829 12:03:58.253158    4820 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19531-965/kubeconfig
	I0829 12:03:58.256106    4820 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0829 12:03:58.259146    4820 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0829 12:03:58.262144    4820 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19531-965/.minikube
	I0829 12:03:58.265137    4820 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0829 12:03:58.268502    4820 config.go:182] Loaded profile config "multinode-531000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0829 12:03:58.268552    4820 driver.go:392] Setting default libvirt URI to qemu:///system
	I0829 12:03:58.272074    4820 out.go:177] * Using the qemu2 driver based on user configuration
	I0829 12:03:58.279124    4820 start.go:297] selected driver: qemu2
	I0829 12:03:58.279129    4820 start.go:901] validating driver "qemu2" against <nil>
	I0829 12:03:58.279135    4820 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0829 12:03:58.281326    4820 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0829 12:03:58.282525    4820 out.go:177] * Automatically selected the socket_vmnet network
	I0829 12:03:58.285181    4820 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0829 12:03:58.285210    4820 cni.go:84] Creating CNI manager for ""
	I0829 12:03:58.285216    4820 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0829 12:03:58.285222    4820 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0829 12:03:58.285250    4820 start.go:340] cluster config:
	{Name:force-systemd-env-088000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:force-systemd-env-088000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 12:03:58.288728    4820 iso.go:125] acquiring lock: {Name:mke1867919ff797f263eb38fb348ce00bba9d753 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 12:03:58.297119    4820 out.go:177] * Starting "force-systemd-env-088000" primary control-plane node in "force-systemd-env-088000" cluster
	I0829 12:03:58.301118    4820 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0829 12:03:58.301131    4820 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19531-965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0829 12:03:58.301138    4820 cache.go:56] Caching tarball of preloaded images
	I0829 12:03:58.301186    4820 preload.go:172] Found /Users/jenkins/minikube-integration/19531-965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0829 12:03:58.301191    4820 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0829 12:03:58.301247    4820 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/force-systemd-env-088000/config.json ...
	I0829 12:03:58.301259    4820 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/force-systemd-env-088000/config.json: {Name:mkff8153bc6e5b99e10574b52cb87fe67b3fba86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 12:03:58.301461    4820 start.go:360] acquireMachinesLock for force-systemd-env-088000: {Name:mk54c878341df5948b162b01843176e59d4f0973 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0829 12:03:58.301491    4820 start.go:364] duration metric: took 24.167µs to acquireMachinesLock for "force-systemd-env-088000"
	I0829 12:03:58.301502    4820 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-088000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:force-systemd-env-088000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0829 12:03:58.301531    4820 start.go:125] createHost starting for "" (driver="qemu2")
	I0829 12:03:58.309099    4820 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0829 12:03:58.326305    4820 start.go:159] libmachine.API.Create for "force-systemd-env-088000" (driver="qemu2")
	I0829 12:03:58.326333    4820 client.go:168] LocalClient.Create starting
	I0829 12:03:58.326396    4820 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19531-965/.minikube/certs/ca.pem
	I0829 12:03:58.326424    4820 main.go:141] libmachine: Decoding PEM data...
	I0829 12:03:58.326434    4820 main.go:141] libmachine: Parsing certificate...
	I0829 12:03:58.326468    4820 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19531-965/.minikube/certs/cert.pem
	I0829 12:03:58.326491    4820 main.go:141] libmachine: Decoding PEM data...
	I0829 12:03:58.326499    4820 main.go:141] libmachine: Parsing certificate...
	I0829 12:03:58.326876    4820 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19531-965/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19531-965/.minikube/cache/iso/arm64/minikube-v1.33.1-1724775098-19521-arm64.iso...
	I0829 12:03:58.510359    4820 main.go:141] libmachine: Creating SSH key...
	I0829 12:03:58.629243    4820 main.go:141] libmachine: Creating Disk image...
	I0829 12:03:58.629256    4820 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0829 12:03:58.629472    4820 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19531-965/.minikube/machines/force-systemd-env-088000/disk.qcow2.raw /Users/jenkins/minikube-integration/19531-965/.minikube/machines/force-systemd-env-088000/disk.qcow2
	I0829 12:03:58.639302    4820 main.go:141] libmachine: STDOUT: 
	I0829 12:03:58.639325    4820 main.go:141] libmachine: STDERR: 
	I0829 12:03:58.639375    4820 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19531-965/.minikube/machines/force-systemd-env-088000/disk.qcow2 +20000M
	I0829 12:03:58.647651    4820 main.go:141] libmachine: STDOUT: Image resized.
	
	I0829 12:03:58.647669    4820 main.go:141] libmachine: STDERR: 
	I0829 12:03:58.647680    4820 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19531-965/.minikube/machines/force-systemd-env-088000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19531-965/.minikube/machines/force-systemd-env-088000/disk.qcow2
	I0829 12:03:58.647686    4820 main.go:141] libmachine: Starting QEMU VM...
	I0829 12:03:58.647698    4820 qemu.go:418] Using hvf for hardware acceleration
	I0829 12:03:58.647746    4820 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19531-965/.minikube/machines/force-systemd-env-088000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19531-965/.minikube/machines/force-systemd-env-088000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19531-965/.minikube/machines/force-systemd-env-088000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:ae:a4:a4:c6:ff -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19531-965/.minikube/machines/force-systemd-env-088000/disk.qcow2
	I0829 12:03:58.649394    4820 main.go:141] libmachine: STDOUT: 
	I0829 12:03:58.649415    4820 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0829 12:03:58.649438    4820 client.go:171] duration metric: took 323.102416ms to LocalClient.Create
	I0829 12:04:00.651739    4820 start.go:128] duration metric: took 2.350191834s to createHost
	I0829 12:04:00.651878    4820 start.go:83] releasing machines lock for "force-systemd-env-088000", held for 2.350410166s
	W0829 12:04:00.651935    4820 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0829 12:04:00.663308    4820 out.go:177] * Deleting "force-systemd-env-088000" in qemu2 ...
	W0829 12:04:00.697156    4820 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0829 12:04:00.697190    4820 start.go:729] Will try again in 5 seconds ...
	I0829 12:04:05.699292    4820 start.go:360] acquireMachinesLock for force-systemd-env-088000: {Name:mk54c878341df5948b162b01843176e59d4f0973 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0829 12:04:05.699743    4820 start.go:364] duration metric: took 365.625µs to acquireMachinesLock for "force-systemd-env-088000"
	I0829 12:04:05.699916    4820 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-088000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:force-systemd-env-088000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0829 12:04:05.700246    4820 start.go:125] createHost starting for "" (driver="qemu2")
	I0829 12:04:05.709666    4820 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0829 12:04:05.760521    4820 start.go:159] libmachine.API.Create for "force-systemd-env-088000" (driver="qemu2")
	I0829 12:04:05.760572    4820 client.go:168] LocalClient.Create starting
	I0829 12:04:05.760697    4820 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19531-965/.minikube/certs/ca.pem
	I0829 12:04:05.760776    4820 main.go:141] libmachine: Decoding PEM data...
	I0829 12:04:05.760794    4820 main.go:141] libmachine: Parsing certificate...
	I0829 12:04:05.760866    4820 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19531-965/.minikube/certs/cert.pem
	I0829 12:04:05.760911    4820 main.go:141] libmachine: Decoding PEM data...
	I0829 12:04:05.760934    4820 main.go:141] libmachine: Parsing certificate...
	I0829 12:04:05.761521    4820 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19531-965/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19531-965/.minikube/cache/iso/arm64/minikube-v1.33.1-1724775098-19521-arm64.iso...
	I0829 12:04:05.942230    4820 main.go:141] libmachine: Creating SSH key...
	I0829 12:04:06.178628    4820 main.go:141] libmachine: Creating Disk image...
	I0829 12:04:06.178637    4820 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0829 12:04:06.178891    4820 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19531-965/.minikube/machines/force-systemd-env-088000/disk.qcow2.raw /Users/jenkins/minikube-integration/19531-965/.minikube/machines/force-systemd-env-088000/disk.qcow2
	I0829 12:04:06.188757    4820 main.go:141] libmachine: STDOUT: 
	I0829 12:04:06.188782    4820 main.go:141] libmachine: STDERR: 
	I0829 12:04:06.188831    4820 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19531-965/.minikube/machines/force-systemd-env-088000/disk.qcow2 +20000M
	I0829 12:04:06.197121    4820 main.go:141] libmachine: STDOUT: Image resized.
	
	I0829 12:04:06.197138    4820 main.go:141] libmachine: STDERR: 
	I0829 12:04:06.197156    4820 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19531-965/.minikube/machines/force-systemd-env-088000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19531-965/.minikube/machines/force-systemd-env-088000/disk.qcow2
	I0829 12:04:06.197161    4820 main.go:141] libmachine: Starting QEMU VM...
	I0829 12:04:06.197170    4820 qemu.go:418] Using hvf for hardware acceleration
	I0829 12:04:06.197198    4820 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19531-965/.minikube/machines/force-systemd-env-088000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19531-965/.minikube/machines/force-systemd-env-088000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19531-965/.minikube/machines/force-systemd-env-088000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:47:9c:00:d0:3b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19531-965/.minikube/machines/force-systemd-env-088000/disk.qcow2
	I0829 12:04:06.198819    4820 main.go:141] libmachine: STDOUT: 
	I0829 12:04:06.198836    4820 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0829 12:04:06.198850    4820 client.go:171] duration metric: took 438.279833ms to LocalClient.Create
	I0829 12:04:08.201012    4820 start.go:128] duration metric: took 2.500767208s to createHost
	I0829 12:04:08.201075    4820 start.go:83] releasing machines lock for "force-systemd-env-088000", held for 2.50134225s
	W0829 12:04:08.201480    4820 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-088000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-088000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0829 12:04:08.211067    4820 out.go:201] 
	W0829 12:04:08.221297    4820 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0829 12:04:08.221379    4820 out.go:270] * 
	* 
	W0829 12:04:08.224247    4820 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0829 12:04:08.234047    4820 out.go:201] 

** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-088000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-088000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-088000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (76.991042ms)

-- stdout --
	* The control-plane node force-systemd-env-088000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-env-088000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-088000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-08-29 12:04:08.328819 -0700 PDT m=+3573.285958001
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-088000 -n force-systemd-env-088000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-088000 -n force-systemd-env-088000: exit status 7 (32.820583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-088000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-088000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-088000
--- FAIL: TestForceSystemdEnv (10.27s)
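Note the retry shape visible in both failures above: the first createHost attempt fails (start.go:714), the half-created profile is deleted, minikube waits five seconds (start.go:729), and exactly one more attempt is made before exiting with status 80. A rough illustration of that control flow, purely a sketch and not minikube's actual implementation:

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// startWithRetry mirrors the logged flow: one attempt, a fixed 5-second
	// delay, then a final attempt whose failure is fatal.
	func startWithRetry(createHost func() error) error {
		if err := createHost(); err != nil {
			fmt.Printf("! StartHost failed, but will try again: %v\n", err)
			time.Sleep(5 * time.Second)
			return createHost()
		}
		return nil
	}

	func main() {
		err := startWithRetry(func() error {
			// Stand-in for VM creation; always fails, like the refused socket above.
			return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
		})
		if err != nil {
			fmt.Println("X Exiting:", err)
		}
	}

Since both attempts hit the same refused socket, the retry only doubles the runtime (roughly 10s total) without changing the outcome.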

TestFunctional/parallel/ServiceCmdConnect (28.68s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-312000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-312000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-65d86f57f4-zx8nc" [e04f2ba0-f5ea-43c6-8d83-acd4d77ecd75] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
E0829 11:23:32.584156    1418 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/addons-048000/client.crt: no such file or directory" logger="UnhandledError"
E0829 11:23:32.592444    1418 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/addons-048000/client.crt: no such file or directory" logger="UnhandledError"
E0829 11:23:32.604016    1418 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/addons-048000/client.crt: no such file or directory" logger="UnhandledError"
E0829 11:23:32.627352    1418 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/addons-048000/client.crt: no such file or directory" logger="UnhandledError"
E0829 11:23:32.670694    1418 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/addons-048000/client.crt: no such file or directory" logger="UnhandledError"
E0829 11:23:32.754063    1418 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/addons-048000/client.crt: no such file or directory" logger="UnhandledError"
E0829 11:23:32.917503    1418 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/addons-048000/client.crt: no such file or directory" logger="UnhandledError"
E0829 11:23:33.240965    1418 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/addons-048000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "hello-node-connect-65d86f57f4-zx8nc" [e04f2ba0-f5ea-43c6-8d83-acd4d77ecd75] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.008354083s
functional_test.go:1649: (dbg) Run:  out/minikube-darwin-arm64 -p functional-312000 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.105.4:31418
functional_test.go:1661: error fetching http://192.168.105.4:31418: Get "http://192.168.105.4:31418": dial tcp 192.168.105.4:31418: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:31418: Get "http://192.168.105.4:31418": dial tcp 192.168.105.4:31418: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:31418: Get "http://192.168.105.4:31418": dial tcp 192.168.105.4:31418: connect: connection refused
E0829 11:23:42.853738    1418 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/addons-048000/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1661: error fetching http://192.168.105.4:31418: Get "http://192.168.105.4:31418": dial tcp 192.168.105.4:31418: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:31418: Get "http://192.168.105.4:31418": dial tcp 192.168.105.4:31418: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:31418: Get "http://192.168.105.4:31418": dial tcp 192.168.105.4:31418: connect: connection refused
functional_test.go:1661: error fetching http://192.168.105.4:31418: Get "http://192.168.105.4:31418": dial tcp 192.168.105.4:31418: connect: connection refused
functional_test.go:1681: failed to fetch http://192.168.105.4:31418: Get "http://192.168.105.4:31418": dial tcp 192.168.105.4:31418: connect: connection refused
functional_test.go:1598: service test failed - dumping debug information
functional_test.go:1599: -----------------------service failure post-mortem--------------------------------
functional_test.go:1602: (dbg) Run:  kubectl --context functional-312000 describe po hello-node-connect
functional_test.go:1606: hello-node pod describe:
Name:             hello-node-connect-65d86f57f4-zx8nc
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-312000/192.168.105.4
Start Time:       Thu, 29 Aug 2024 11:23:31 -0700
Labels:           app=hello-node-connect
pod-template-hash=65d86f57f4
Annotations:      <none>
Status:           Running
IP:               10.244.0.8
IPs:
IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-connect-65d86f57f4
Containers:
echoserver-arm:
Container ID:   docker://0650380888a9de102cd52e0446d3e56910d86dfb91e33ee9c1c3a25938ee9bf1
Image:          registry.k8s.io/echoserver-arm:1.8
Image ID:       docker-pullable://registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
Port:           <none>
Host Port:      <none>
State:          Terminated
Reason:       Error
Exit Code:    1
Started:      Thu, 29 Aug 2024 11:23:47 -0700
Finished:     Thu, 29 Aug 2024 11:23:47 -0700
Last State:     Terminated
Reason:       Error
Exit Code:    1
Started:      Thu, 29 Aug 2024 11:23:32 -0700
Finished:     Thu, 29 Aug 2024 11:23:32 -0700
Ready:          False
Restart Count:  2
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fgjgr (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-fgjgr:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
ConfigMapOptional:       <nil>
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                From               Message
----     ------     ----               ----               -------
Normal   Scheduled  28s                default-scheduler  Successfully assigned default/hello-node-connect-65d86f57f4-zx8nc to functional-312000
Normal   Pulled     12s (x3 over 27s)  kubelet            Container image "registry.k8s.io/echoserver-arm:1.8" already present on machine
Normal   Created    12s (x3 over 27s)  kubelet            Created container echoserver-arm
Normal   Started    12s (x3 over 27s)  kubelet            Started container echoserver-arm
Warning  BackOff    11s (x2 over 26s)  kubelet            Back-off restarting failed container echoserver-arm in pod hello-node-connect-65d86f57f4-zx8nc_default(e04f2ba0-f5ea-43c6-8d83-acd4d77ecd75)

functional_test.go:1608: (dbg) Run:  kubectl --context functional-312000 logs -l app=hello-node-connect
functional_test.go:1612: hello-node logs:
exec /usr/sbin/nginx: exec format error
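This one log line is the likely root cause of the whole test failure: the entrypoint binary inside registry.k8s.io/echoserver-arm:1.8 appears to be built for a different CPU architecture than this arm64 node, so every container start exits immediately, the Deployment crash-loops (see the BackOff events above), and the Service is left with no ready endpoints. One way to confirm such a mismatch on a binary copied out of the image, sketched with Go's debug/elf; the file path is a hypothetical example:

	package main

	import (
		"debug/elf"
		"fmt"
		"os"
	)

	// Prints the ELF machine type of a binary (e.g. EM_X86_64 vs EM_AARCH64).
	// An "exec format error" means this does not match the node's CPU.
	func main() {
		f, err := elf.Open("/tmp/nginx-from-image") // hypothetical: binary extracted from the image
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		defer f.Close()
		fmt.Println("ELF machine:", f.Machine) // EM_AARCH64 is what an arm64 node can run
	}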
functional_test.go:1614: (dbg) Run:  kubectl --context functional-312000 describe svc hello-node-connect
functional_test.go:1618: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.111.70.57
IPs:                      10.111.70.57
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  31418/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
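The Service wiring itself looks correct (NodePort 31418 forwarding to port 8080), but the Endpoints field above is empty because no backing pod ever becomes Ready; with no endpoints behind a NodePort, kube-proxy typically rejects connections outright, which is why the earlier fetches failed fast with "connection refused" instead of timing out. A small probe that distinguishes the two failure modes; hypothetical helper, with the address taken from the log:

	package main

	import (
		"errors"
		"fmt"
		"net"
		"syscall"
		"time"
	)

	// probe reports whether a TCP endpoint accepts, refuses, or fails another
	// way. "refused" from a NodePort usually means the Service has no ready
	// endpoints, matching the describe output above.
	func probe(addr string) string {
		conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
		switch {
		case err == nil:
			conn.Close()
			return "reachable"
		case errors.Is(err, syscall.ECONNREFUSED):
			return "refused (no listener / no ready endpoints)"
		default:
			return "other failure: " + err.Error()
		}
	}

	func main() {
		fmt.Println(probe("192.168.105.4:31418"))
	}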
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-312000 -n functional-312000
helpers_test.go:244: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p functional-312000 logs -n 25
helpers_test.go:252: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| Command |                                                      Args                                                       |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| image   | functional-312000 image ls                                                                                      | functional-312000 | jenkins | v1.33.1 | 29 Aug 24 11:23 PDT | 29 Aug 24 11:23 PDT |
	| image   | functional-312000 image save                                                                                    | functional-312000 | jenkins | v1.33.1 | 29 Aug 24 11:23 PDT | 29 Aug 24 11:23 PDT |
	|         | kicbase/echo-server:functional-312000                                                                           |                   |         |         |                     |                     |
	|         | /Users/jenkins/workspace/echo-server-save.tar                                                                   |                   |         |         |                     |                     |
	|         | --alsologtostderr                                                                                               |                   |         |         |                     |                     |
	| image   | functional-312000 image rm                                                                                      | functional-312000 | jenkins | v1.33.1 | 29 Aug 24 11:23 PDT | 29 Aug 24 11:23 PDT |
	|         | kicbase/echo-server:functional-312000                                                                           |                   |         |         |                     |                     |
	|         | --alsologtostderr                                                                                               |                   |         |         |                     |                     |
	| image   | functional-312000 image ls                                                                                      | functional-312000 | jenkins | v1.33.1 | 29 Aug 24 11:23 PDT | 29 Aug 24 11:23 PDT |
	| image   | functional-312000 image load                                                                                    | functional-312000 | jenkins | v1.33.1 | 29 Aug 24 11:23 PDT | 29 Aug 24 11:23 PDT |
	|         | /Users/jenkins/workspace/echo-server-save.tar                                                                   |                   |         |         |                     |                     |
	|         | --alsologtostderr                                                                                               |                   |         |         |                     |                     |
	| image   | functional-312000 image ls                                                                                      | functional-312000 | jenkins | v1.33.1 | 29 Aug 24 11:23 PDT | 29 Aug 24 11:23 PDT |
	| image   | functional-312000 image save --daemon                                                                           | functional-312000 | jenkins | v1.33.1 | 29 Aug 24 11:23 PDT | 29 Aug 24 11:23 PDT |
	|         | kicbase/echo-server:functional-312000                                                                           |                   |         |         |                     |                     |
	|         | --alsologtostderr                                                                                               |                   |         |         |                     |                     |
	| ssh     | functional-312000 ssh echo                                                                                      | functional-312000 | jenkins | v1.33.1 | 29 Aug 24 11:23 PDT | 29 Aug 24 11:23 PDT |
	|         | hello                                                                                                           |                   |         |         |                     |                     |
	| ssh     | functional-312000 ssh cat                                                                                       | functional-312000 | jenkins | v1.33.1 | 29 Aug 24 11:23 PDT | 29 Aug 24 11:23 PDT |
	|         | /etc/hostname                                                                                                   |                   |         |         |                     |                     |
	| tunnel  | functional-312000 tunnel                                                                                        | functional-312000 | jenkins | v1.33.1 | 29 Aug 24 11:23 PDT |                     |
	|         | --alsologtostderr                                                                                               |                   |         |         |                     |                     |
	| tunnel  | functional-312000 tunnel                                                                                        | functional-312000 | jenkins | v1.33.1 | 29 Aug 24 11:23 PDT |                     |
	|         | --alsologtostderr                                                                                               |                   |         |         |                     |                     |
	| tunnel  | functional-312000 tunnel                                                                                        | functional-312000 | jenkins | v1.33.1 | 29 Aug 24 11:23 PDT |                     |
	|         | --alsologtostderr                                                                                               |                   |         |         |                     |                     |
	| service | functional-312000 service list                                                                                  | functional-312000 | jenkins | v1.33.1 | 29 Aug 24 11:23 PDT | 29 Aug 24 11:23 PDT |
	| service | functional-312000 service list                                                                                  | functional-312000 | jenkins | v1.33.1 | 29 Aug 24 11:23 PDT | 29 Aug 24 11:23 PDT |
	|         | -o json                                                                                                         |                   |         |         |                     |                     |
	| service | functional-312000 service                                                                                       | functional-312000 | jenkins | v1.33.1 | 29 Aug 24 11:23 PDT | 29 Aug 24 11:23 PDT |
	|         | --namespace=default --https                                                                                     |                   |         |         |                     |                     |
	|         | --url hello-node                                                                                                |                   |         |         |                     |                     |
	| service | functional-312000                                                                                               | functional-312000 | jenkins | v1.33.1 | 29 Aug 24 11:23 PDT | 29 Aug 24 11:23 PDT |
	|         | service hello-node --url                                                                                        |                   |         |         |                     |                     |
	|         | --format={{.IP}}                                                                                                |                   |         |         |                     |                     |
	| service | functional-312000 service                                                                                       | functional-312000 | jenkins | v1.33.1 | 29 Aug 24 11:23 PDT | 29 Aug 24 11:23 PDT |
	|         | hello-node --url                                                                                                |                   |         |         |                     |                     |
	| addons  | functional-312000 addons list                                                                                   | functional-312000 | jenkins | v1.33.1 | 29 Aug 24 11:23 PDT | 29 Aug 24 11:23 PDT |
	| addons  | functional-312000 addons list                                                                                   | functional-312000 | jenkins | v1.33.1 | 29 Aug 24 11:23 PDT | 29 Aug 24 11:23 PDT |
	|         | -o json                                                                                                         |                   |         |         |                     |                     |
	| service | functional-312000 service                                                                                       | functional-312000 | jenkins | v1.33.1 | 29 Aug 24 11:23 PDT | 29 Aug 24 11:23 PDT |
	|         | hello-node-connect --url                                                                                        |                   |         |         |                     |                     |
	| ssh     | functional-312000 ssh findmnt                                                                                   | functional-312000 | jenkins | v1.33.1 | 29 Aug 24 11:23 PDT |                     |
	|         | -T /mount-9p | grep 9p                                                                                          |                   |         |         |                     |                     |
	| mount   | -p functional-312000                                                                                            | functional-312000 | jenkins | v1.33.1 | 29 Aug 24 11:23 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port1417900404/001:/mount-9p |                   |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                                          |                   |         |         |                     |                     |
	| ssh     | functional-312000 ssh findmnt                                                                                   | functional-312000 | jenkins | v1.33.1 | 29 Aug 24 11:23 PDT | 29 Aug 24 11:23 PDT |
	|         | -T /mount-9p | grep 9p                                                                                          |                   |         |         |                     |                     |
	| ssh     | functional-312000 ssh -- ls                                                                                     | functional-312000 | jenkins | v1.33.1 | 29 Aug 24 11:23 PDT | 29 Aug 24 11:23 PDT |
	|         | -la /mount-9p                                                                                                   |                   |         |         |                     |                     |
	| ssh     | functional-312000 ssh cat                                                                                       | functional-312000 | jenkins | v1.33.1 | 29 Aug 24 11:23 PDT | 29 Aug 24 11:23 PDT |
	|         | /mount-9p/test-1724955835901787000                                                                              |                   |         |         |                     |                     |
	|---------|-----------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/29 11:22:35
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0829 11:22:35.736753    2025 out.go:345] Setting OutFile to fd 1 ...
	I0829 11:22:35.736866    2025 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 11:22:35.736868    2025 out.go:358] Setting ErrFile to fd 2...
	I0829 11:22:35.736869    2025 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 11:22:35.736978    2025 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19531-965/.minikube/bin
	I0829 11:22:35.738016    2025 out.go:352] Setting JSON to false
	I0829 11:22:35.753985    2025 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1319,"bootTime":1724954436,"procs":462,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0829 11:22:35.754043    2025 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0829 11:22:35.759502    2025 out.go:177] * [functional-312000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0829 11:22:35.768521    2025 out.go:177]   - MINIKUBE_LOCATION=19531
	I0829 11:22:35.768604    2025 notify.go:220] Checking for updates...
	I0829 11:22:35.775506    2025 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19531-965/kubeconfig
	I0829 11:22:35.778506    2025 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0829 11:22:35.781497    2025 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0829 11:22:35.784543    2025 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19531-965/.minikube
	I0829 11:22:35.787430    2025 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0829 11:22:35.790766    2025 config.go:182] Loaded profile config "functional-312000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0829 11:22:35.790814    2025 driver.go:392] Setting default libvirt URI to qemu:///system
	I0829 11:22:35.795536    2025 out.go:177] * Using the qemu2 driver based on existing profile
	I0829 11:22:35.802490    2025 start.go:297] selected driver: qemu2
	I0829 11:22:35.802494    2025 start.go:901] validating driver "qemu2" against &{Name:functional-312000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-312000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 11:22:35.802543    2025 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0829 11:22:35.804731    2025 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0829 11:22:35.804773    2025 cni.go:84] Creating CNI manager for ""
	I0829 11:22:35.804780    2025 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0829 11:22:35.804825    2025 start.go:340] cluster config:
	{Name:functional-312000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-312000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 11:22:35.808290    2025 iso.go:125] acquiring lock: {Name:mke1867919ff797f263eb38fb348ce00bba9d753 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 11:22:35.814476    2025 out.go:177] * Starting "functional-312000" primary control-plane node in "functional-312000" cluster
	I0829 11:22:35.818516    2025 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0829 11:22:35.818526    2025 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19531-965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0829 11:22:35.818532    2025 cache.go:56] Caching tarball of preloaded images
	I0829 11:22:35.818581    2025 preload.go:172] Found /Users/jenkins/minikube-integration/19531-965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0829 11:22:35.818585    2025 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0829 11:22:35.818627    2025 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/functional-312000/config.json ...
	I0829 11:22:35.819048    2025 start.go:360] acquireMachinesLock for functional-312000: {Name:mk54c878341df5948b162b01843176e59d4f0973 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0829 11:22:35.819077    2025 start.go:364] duration metric: took 25.542µs to acquireMachinesLock for "functional-312000"
	I0829 11:22:35.819088    2025 start.go:96] Skipping create...Using existing machine configuration
	I0829 11:22:35.819092    2025 fix.go:54] fixHost starting: 
	I0829 11:22:35.819650    2025 fix.go:112] recreateIfNeeded on functional-312000: state=Running err=<nil>
	W0829 11:22:35.819655    2025 fix.go:138] unexpected machine state, will restart: <nil>
	I0829 11:22:35.824494    2025 out.go:177] * Updating the running qemu2 "functional-312000" VM ...
	I0829 11:22:35.832346    2025 machine.go:93] provisionDockerMachine start ...
	I0829 11:22:35.832380    2025 main.go:141] libmachine: Using SSH client type: native
	I0829 11:22:35.832497    2025 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100c685a0] 0x100c6ae00 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0829 11:22:35.832500    2025 main.go:141] libmachine: About to run SSH command:
	hostname
	I0829 11:22:35.886809    2025 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-312000
	
	I0829 11:22:35.886820    2025 buildroot.go:166] provisioning hostname "functional-312000"
	I0829 11:22:35.886862    2025 main.go:141] libmachine: Using SSH client type: native
	I0829 11:22:35.886984    2025 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100c685a0] 0x100c6ae00 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0829 11:22:35.886987    2025 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-312000 && echo "functional-312000" | sudo tee /etc/hostname
	I0829 11:22:35.945242    2025 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-312000
	
	I0829 11:22:35.945294    2025 main.go:141] libmachine: Using SSH client type: native
	I0829 11:22:35.945414    2025 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100c685a0] 0x100c6ae00 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0829 11:22:35.945420    2025 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-312000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-312000/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-312000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0829 11:22:36.010936    2025 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0829 11:22:36.010945    2025 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19531-965/.minikube CaCertPath:/Users/jenkins/minikube-integration/19531-965/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19531-965/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19531-965/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19531-965/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19531-965/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19531-965/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19531-965/.minikube}
	I0829 11:22:36.010952    2025 buildroot.go:174] setting up certificates
	I0829 11:22:36.010956    2025 provision.go:84] configureAuth start
	I0829 11:22:36.010964    2025 provision.go:143] copyHostCerts
	I0829 11:22:36.011069    2025 exec_runner.go:144] found /Users/jenkins/minikube-integration/19531-965/.minikube/ca.pem, removing ...
	I0829 11:22:36.011074    2025 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19531-965/.minikube/ca.pem
	I0829 11:22:36.011207    2025 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19531-965/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19531-965/.minikube/ca.pem (1082 bytes)
	I0829 11:22:36.011379    2025 exec_runner.go:144] found /Users/jenkins/minikube-integration/19531-965/.minikube/cert.pem, removing ...
	I0829 11:22:36.011380    2025 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19531-965/.minikube/cert.pem
	I0829 11:22:36.011431    2025 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19531-965/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19531-965/.minikube/cert.pem (1123 bytes)
	I0829 11:22:36.011537    2025 exec_runner.go:144] found /Users/jenkins/minikube-integration/19531-965/.minikube/key.pem, removing ...
	I0829 11:22:36.011538    2025 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19531-965/.minikube/key.pem
	I0829 11:22:36.011582    2025 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19531-965/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19531-965/.minikube/key.pem (1675 bytes)
	I0829 11:22:36.011660    2025 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19531-965/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19531-965/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19531-965/.minikube/certs/ca-key.pem org=jenkins.functional-312000 san=[127.0.0.1 192.168.105.4 functional-312000 localhost minikube]
	I0829 11:22:36.131264    2025 provision.go:177] copyRemoteCerts
	I0829 11:22:36.131304    2025 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0829 11:22:36.131311    2025 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19531-965/.minikube/machines/functional-312000/id_rsa Username:docker}
	I0829 11:22:36.161383    2025 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19531-965/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0829 11:22:36.169848    2025 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19531-965/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0829 11:22:36.177950    2025 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19531-965/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0829 11:22:36.186011    2025 provision.go:87] duration metric: took 175.052083ms to configureAuth
	I0829 11:22:36.186017    2025 buildroot.go:189] setting minikube options for container-runtime
	I0829 11:22:36.186112    2025 config.go:182] Loaded profile config "functional-312000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0829 11:22:36.186159    2025 main.go:141] libmachine: Using SSH client type: native
	I0829 11:22:36.186252    2025 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100c685a0] 0x100c6ae00 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0829 11:22:36.186254    2025 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0829 11:22:36.244097    2025 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0829 11:22:36.244105    2025 buildroot.go:70] root file system type: tmpfs
	I0829 11:22:36.244160    2025 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0829 11:22:36.244223    2025 main.go:141] libmachine: Using SSH client type: native
	I0829 11:22:36.244336    2025 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100c685a0] 0x100c6ae00 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0829 11:22:36.244367    2025 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0829 11:22:36.303665    2025 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0829 11:22:36.303711    2025 main.go:141] libmachine: Using SSH client type: native
	I0829 11:22:36.303821    2025 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100c685a0] 0x100c6ae00 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0829 11:22:36.303827    2025 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0829 11:22:36.360120    2025 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0829 11:22:36.360127    2025 machine.go:96] duration metric: took 527.781625ms to provisionDockerMachine
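Worth noting: the provisioning step writes the rendered unit to docker.service.new and only moves it into place and restarts Docker when `diff -u` reports a difference, so re-provisioning an already-running VM is effectively idempotent. A minimal sketch to inspect the unit systemd actually ends up with, run inside the VM:

    sudo systemctl cat docker.service
    # the empty ExecStart= line clears the command inherited from the base unit,
    # leaving the second ExecStart= as the only one (see the unit's own comments above)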
	I0829 11:22:36.360131    2025 start.go:293] postStartSetup for "functional-312000" (driver="qemu2")
	I0829 11:22:36.360136    2025 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0829 11:22:36.360181    2025 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0829 11:22:36.360188    2025 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19531-965/.minikube/machines/functional-312000/id_rsa Username:docker}
	I0829 11:22:36.390233    2025 ssh_runner.go:195] Run: cat /etc/os-release
	I0829 11:22:36.391742    2025 info.go:137] Remote host: Buildroot 2023.02.9
	I0829 11:22:36.391747    2025 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19531-965/.minikube/addons for local assets ...
	I0829 11:22:36.391832    2025 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19531-965/.minikube/files for local assets ...
	I0829 11:22:36.391958    2025 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19531-965/.minikube/files/etc/ssl/certs/14182.pem -> 14182.pem in /etc/ssl/certs
	I0829 11:22:36.392073    2025 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19531-965/.minikube/files/etc/test/nested/copy/1418/hosts -> hosts in /etc/test/nested/copy/1418
	I0829 11:22:36.392107    2025 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1418
	I0829 11:22:36.395310    2025 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19531-965/.minikube/files/etc/ssl/certs/14182.pem --> /etc/ssl/certs/14182.pem (1708 bytes)
	I0829 11:22:36.402785    2025 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19531-965/.minikube/files/etc/test/nested/copy/1418/hosts --> /etc/test/nested/copy/1418/hosts (40 bytes)
	I0829 11:22:36.410886    2025 start.go:296] duration metric: took 50.750875ms for postStartSetup
	I0829 11:22:36.410897    2025 fix.go:56] duration metric: took 591.811625ms for fixHost
	I0829 11:22:36.410932    2025 main.go:141] libmachine: Using SSH client type: native
	I0829 11:22:36.411034    2025 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x100c685a0] 0x100c6ae00 <nil>  [] 0s} 192.168.105.4 22 <nil> <nil>}
	I0829 11:22:36.411036    2025 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0829 11:22:36.468144    2025 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724955756.458871192
	
	I0829 11:22:36.468150    2025 fix.go:216] guest clock: 1724955756.458871192
	I0829 11:22:36.468153    2025 fix.go:229] Guest: 2024-08-29 11:22:36.458871192 -0700 PDT Remote: 2024-08-29 11:22:36.410898 -0700 PDT m=+0.693424084 (delta=47.973192ms)
	I0829 11:22:36.468163    2025 fix.go:200] guest clock delta is within tolerance: 47.973192ms
	I0829 11:22:36.468165    2025 start.go:83] releasing machines lock for "functional-312000", held for 649.090833ms
	I0829 11:22:36.468493    2025 ssh_runner.go:195] Run: cat /version.json
	I0829 11:22:36.468500    2025 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19531-965/.minikube/machines/functional-312000/id_rsa Username:docker}
	I0829 11:22:36.468525    2025 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0829 11:22:36.468542    2025 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19531-965/.minikube/machines/functional-312000/id_rsa Username:docker}
	I0829 11:22:36.539340    2025 ssh_runner.go:195] Run: systemctl --version
	I0829 11:22:36.541610    2025 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0829 11:22:36.543519    2025 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0829 11:22:36.543541    2025 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0829 11:22:36.547094    2025 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0829 11:22:36.547100    2025 start.go:495] detecting cgroup driver to use...
	I0829 11:22:36.547164    2025 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0829 11:22:36.553726    2025 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0829 11:22:36.557686    2025 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0829 11:22:36.561580    2025 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0829 11:22:36.561604    2025 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0829 11:22:36.565491    2025 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0829 11:22:36.569565    2025 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0829 11:22:36.573558    2025 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0829 11:22:36.577425    2025 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0829 11:22:36.581259    2025 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0829 11:22:36.585215    2025 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0829 11:22:36.589009    2025 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0829 11:22:36.592815    2025 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0829 11:22:36.596474    2025 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0829 11:22:36.599990    2025 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 11:22:36.694279    2025 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0829 11:22:36.701614    2025 start.go:495] detecting cgroup driver to use...
	I0829 11:22:36.701669    2025 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0829 11:22:36.710260    2025 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0829 11:22:36.716248    2025 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0829 11:22:36.724476    2025 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0829 11:22:36.730064    2025 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0829 11:22:36.735590    2025 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0829 11:22:36.742091    2025 ssh_runner.go:195] Run: which cri-dockerd
	I0829 11:22:36.743682    2025 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0829 11:22:36.746879    2025 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0829 11:22:36.752914    2025 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0829 11:22:36.847518    2025 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0829 11:22:36.942354    2025 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0829 11:22:36.942411    2025 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0829 11:22:36.949007    2025 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 11:22:37.042004    2025 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0829 11:22:49.441765    2025 ssh_runner.go:235] Completed: sudo systemctl restart docker: (12.399860542s)
	I0829 11:22:49.441831    2025 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0829 11:22:49.448299    2025 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0829 11:22:49.456442    2025 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0829 11:22:49.463023    2025 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0829 11:22:49.538075    2025 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0829 11:22:49.609827    2025 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 11:22:49.681273    2025 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0829 11:22:49.688334    2025 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0829 11:22:49.694382    2025 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 11:22:49.783679    2025 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0829 11:22:49.820633    2025 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0829 11:22:49.820723    2025 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0829 11:22:49.823064    2025 start.go:563] Will wait 60s for crictl version
	I0829 11:22:49.823107    2025 ssh_runner.go:195] Run: which crictl
	I0829 11:22:49.824537    2025 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0829 11:22:49.837151    2025 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.2
	RuntimeApiVersion:  v1
	I0829 11:22:49.837220    2025 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0829 11:22:49.844791    2025 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0829 11:22:49.859243    2025 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.1.2 ...
	I0829 11:22:49.859321    2025 ssh_runner.go:195] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0829 11:22:49.864145    2025 out.go:177]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I0829 11:22:49.868163    2025 kubeadm.go:883] updating cluster {Name:functional-312000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-312000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0829 11:22:49.868231    2025 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0829 11:22:49.868287    2025 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0829 11:22:49.874291    2025 docker.go:685] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-312000
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I0829 11:22:49.874296    2025 docker.go:615] Images already preloaded, skipping extraction
	I0829 11:22:49.874343    2025 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0829 11:22:49.879706    2025 docker.go:685] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-312000
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I0829 11:22:49.879712    2025 cache_images.go:84] Images are preloaded, skipping loading
	I0829 11:22:49.879715    2025 kubeadm.go:934] updating node { 192.168.105.4 8441 v1.31.0 docker true true} ...
	I0829 11:22:49.879793    2025 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-312000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.105.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:functional-312000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0829 11:22:49.879837    2025 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0829 11:22:49.896103    2025 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I0829 11:22:49.896159    2025 cni.go:84] Creating CNI manager for ""
	I0829 11:22:49.896165    2025 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0829 11:22:49.896169    2025 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0829 11:22:49.896177    2025 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.4 APIServerPort:8441 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-312000 NodeName:functional-312000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.4"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.105.4 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0829 11:22:49.896238    2025 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.4
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-312000"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.4
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.4"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0829 11:22:49.896301    2025 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0829 11:22:49.900301    2025 binaries.go:44] Found k8s binaries, skipping transfer
	I0829 11:22:49.900331    2025 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0829 11:22:49.903892    2025 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0829 11:22:49.909872    2025 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0829 11:22:49.915854    2025 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2012 bytes)
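The kubeadm config rendered above is staged as /var/tmp/minikube/kubeadm.yaml.new before being applied; it wires the profile's apiserver extra option (enable-admission-plugins=NamespaceAutoProvision) into ClusterConfiguration and points kubelet and kubeadm at the cri-dockerd socket. As a sanity check, recent kubeadm releases (v1.26+) can validate such a file; a sketch using the binary and file paths from the surrounding log:

    sudo /var/lib/minikube/binaries/v1.31.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new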
	I0829 11:22:49.922044    2025 ssh_runner.go:195] Run: grep 192.168.105.4	control-plane.minikube.internal$ /etc/hosts
	I0829 11:22:49.923550    2025 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 11:22:49.993343    2025 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 11:22:49.998908    2025 certs.go:68] Setting up /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/functional-312000 for IP: 192.168.105.4
	I0829 11:22:49.998915    2025 certs.go:194] generating shared ca certs ...
	I0829 11:22:49.998930    2025 certs.go:226] acquiring lock for ca certs: {Name:mk29df1c1b696cda1cc19a90487167bb76984cb6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 11:22:49.999086    2025 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19531-965/.minikube/ca.key
	I0829 11:22:49.999130    2025 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19531-965/.minikube/proxy-client-ca.key
	I0829 11:22:49.999138    2025 certs.go:256] generating profile certs ...
	I0829 11:22:49.999196    2025 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/functional-312000/client.key
	I0829 11:22:49.999246    2025 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/functional-312000/apiserver.key.65ae9de7
	I0829 11:22:49.999290    2025 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/functional-312000/proxy-client.key
	I0829 11:22:49.999442    2025 certs.go:484] found cert: /Users/jenkins/minikube-integration/19531-965/.minikube/certs/1418.pem (1338 bytes)
	W0829 11:22:49.999472    2025 certs.go:480] ignoring /Users/jenkins/minikube-integration/19531-965/.minikube/certs/1418_empty.pem, impossibly tiny 0 bytes
	I0829 11:22:49.999476    2025 certs.go:484] found cert: /Users/jenkins/minikube-integration/19531-965/.minikube/certs/ca-key.pem (1675 bytes)
	I0829 11:22:49.999496    2025 certs.go:484] found cert: /Users/jenkins/minikube-integration/19531-965/.minikube/certs/ca.pem (1082 bytes)
	I0829 11:22:49.999516    2025 certs.go:484] found cert: /Users/jenkins/minikube-integration/19531-965/.minikube/certs/cert.pem (1123 bytes)
	I0829 11:22:49.999533    2025 certs.go:484] found cert: /Users/jenkins/minikube-integration/19531-965/.minikube/certs/key.pem (1675 bytes)
	I0829 11:22:49.999569    2025 certs.go:484] found cert: /Users/jenkins/minikube-integration/19531-965/.minikube/files/etc/ssl/certs/14182.pem (1708 bytes)
	I0829 11:22:49.999898    2025 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19531-965/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0829 11:22:50.008683    2025 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19531-965/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0829 11:22:50.016681    2025 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19531-965/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0829 11:22:50.024726    2025 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19531-965/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0829 11:22:50.032788    2025 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/functional-312000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0829 11:22:50.040860    2025 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/functional-312000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0829 11:22:50.048840    2025 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/functional-312000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0829 11:22:50.056757    2025 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/functional-312000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0829 11:22:50.064911    2025 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19531-965/.minikube/files/etc/ssl/certs/14182.pem --> /usr/share/ca-certificates/14182.pem (1708 bytes)
	I0829 11:22:50.072749    2025 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19531-965/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0829 11:22:50.081195    2025 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19531-965/.minikube/certs/1418.pem --> /usr/share/ca-certificates/1418.pem (1338 bytes)
	I0829 11:22:50.089424    2025 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0829 11:22:50.095148    2025 ssh_runner.go:195] Run: openssl version
	I0829 11:22:50.097195    2025 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1418.pem && ln -fs /usr/share/ca-certificates/1418.pem /etc/ssl/certs/1418.pem"
	I0829 11:22:50.100975    2025 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1418.pem
	I0829 11:22:50.102435    2025 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 29 18:20 /usr/share/ca-certificates/1418.pem
	I0829 11:22:50.102452    2025 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1418.pem
	I0829 11:22:50.104386    2025 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1418.pem /etc/ssl/certs/51391683.0"
	I0829 11:22:50.108080    2025 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14182.pem && ln -fs /usr/share/ca-certificates/14182.pem /etc/ssl/certs/14182.pem"
	I0829 11:22:50.111955    2025 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14182.pem
	I0829 11:22:50.113574    2025 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 29 18:20 /usr/share/ca-certificates/14182.pem
	I0829 11:22:50.113590    2025 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14182.pem
	I0829 11:22:50.115597    2025 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14182.pem /etc/ssl/certs/3ec20f2e.0"
	I0829 11:22:50.119065    2025 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0829 11:22:50.122682    2025 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0829 11:22:50.124342    2025 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 29 18:05 /usr/share/ca-certificates/minikubeCA.pem
	I0829 11:22:50.124362    2025 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0829 11:22:50.126324    2025 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
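The symlink names above (51391683.0, 3ec20f2e.0, b5213941.0) are OpenSSL subject hashes: OpenSSL locates a trusted issuer by hashing its subject and looking for <hash>.0 in /etc/ssl/certs, so each CA is linked under its hash. The same dance in two lines, using paths from this log:

    # Link a CA into the hashed-lookup directory OpenSSL scans
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"   # hash is b5213941 here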
	I0829 11:22:50.129428    2025 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0829 11:22:50.130870    2025 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0829 11:22:50.132764    2025 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0829 11:22:50.134822    2025 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0829 11:22:50.136832    2025 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0829 11:22:50.138754    2025 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0829 11:22:50.140642    2025 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
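The -checkend 86400 runs above exit non-zero if a certificate expires within the next 86400 seconds (24 h); that exit code is how minikube decides whether the existing control-plane certs can be reused or must be regenerated. A condensed sketch of the same check:

    # Flag any control-plane cert expiring within 24h (paths from this log)
    for crt in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
               /var/lib/minikube/certs/etcd/server.crt \
               /var/lib/minikube/certs/front-proxy-client.crt; do
      openssl x509 -noout -in "$crt" -checkend 86400 || echo "renew: $crt"
    done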
	I0829 11:22:50.142585    2025 kubeadm.go:392] StartCluster: {Name:functional-312000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-312000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 11:22:50.142660    2025 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0829 11:22:50.148726    2025 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0829 11:22:50.152410    2025 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0829 11:22:50.152413    2025 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0829 11:22:50.152438    2025 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0829 11:22:50.155908    2025 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0829 11:22:50.156212    2025 kubeconfig.go:125] found "functional-312000" server: "https://192.168.105.4:8441"
	I0829 11:22:50.156829    2025 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0829 11:22:50.160220    2025 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -22,7 +22,7 @@
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.105.4"]
	   extraArgs:
	-    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+    enable-admission-plugins: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     allocate-node-cidrs: "true"
	
	-- /stdout --
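Drift detection is nothing more than the diff -u above between the last-applied /var/tmp/minikube/kubeadm.yaml and the freshly rendered one; the only change here is the apiserver admission-plugin list, which matches the ExtraOptions entry in the StartCluster dump below. A plausible invocation that produces exactly this drift (the test's actual command line is not in this excerpt, so treat it as an assumption):

    minikube start -p functional-312000 \
      --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision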
	I0829 11:22:50.160224    2025 kubeadm.go:1160] stopping kube-system containers ...
	I0829 11:22:50.160260    2025 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0829 11:22:50.167180    2025 docker.go:483] Stopping containers: [9a60c0aa5924 180e98e77f5f b61a335098d4 488c997d8b28 88426a442243 fbd545c98f01 2045959df095 d35d8d1d0f5c 6139093704f1 8d9234bb7196 05833a52685b 7c93ae567b08 a8bfa396c38c c3de4ee2fb10 5ffc980043f7 0fae92e6caf9 23d397f4ac20 1505f31da467 9ab49a821bc7 49d254f9aff4 daa1fcd45e4e 69cc14406ac3 4a8abfc5dbd1 e6fa65b6ce9c 1dbee0f40197 e26a914af049 4d1de353c2d7 7da2b0a735e3]
	I0829 11:22:50.167244    2025 ssh_runner.go:195] Run: docker stop 9a60c0aa5924 180e98e77f5f b61a335098d4 488c997d8b28 88426a442243 fbd545c98f01 2045959df095 d35d8d1d0f5c 6139093704f1 8d9234bb7196 05833a52685b 7c93ae567b08 a8bfa396c38c c3de4ee2fb10 5ffc980043f7 0fae92e6caf9 23d397f4ac20 1505f31da467 9ab49a821bc7 49d254f9aff4 daa1fcd45e4e 69cc14406ac3 4a8abfc5dbd1 e6fa65b6ce9c 1dbee0f40197 e26a914af049 4d1de353c2d7 7da2b0a735e3
	I0829 11:22:50.174502    2025 ssh_runner.go:195] Run: sudo systemctl stop kubelet
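Before reconfiguring, every container whose Docker name matches the kubelet's k8s_<container>_<pod>_(kube-system)_ naming pattern is stopped along with the kubelet itself, so kubeadm can rewrite the static-pod manifests without the old control plane restarting them. A one-liner sketch of the same step (Docker's name filter accepts regular expressions):

    docker ps -a --filter=name='k8s_.*_(kube-system)_' --format '{{.ID}}' | xargs -r docker stop
    sudo systemctl stop kubelet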
	I0829 11:22:50.284074    2025 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0829 11:22:50.290192    2025 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5647 Aug 29 18:21 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5657 Aug 29 18:22 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2007 Aug 29 18:21 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5601 Aug 29 18:22 /etc/kubernetes/scheduler.conf
	
	I0829 11:22:50.290235    2025 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I0829 11:22:50.294424    2025 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I0829 11:22:50.298436    2025 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I0829 11:22:50.302428    2025 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0829 11:22:50.302727    2025 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0829 11:22:50.307469    2025 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I0829 11:22:50.311318    2025 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0829 11:22:50.311350    2025 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
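Each static kubeconfig under /etc/kubernetes is grepped for the expected server URL; files that do not mention https://control-plane.minikube.internal:8441 (here controller-manager.conf and scheduler.conf, whose greps exited 1) are deleted so the kubeconfig phase below regenerates them. Per file, the check reduces to:

    f=/etc/kubernetes/scheduler.conf   # the same test runs for admin, kubelet, controller-manager
    sudo grep -q "https://control-plane.minikube.internal:8441" "$f" || sudo rm -f "$f"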
	I0829 11:22:50.314877    2025 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0829 11:22:50.318043    2025 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 11:22:50.335199    2025 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 11:22:51.033887    2025 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0829 11:22:51.136569    2025 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 11:22:51.158080    2025 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
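Rather than a full kubeadm init, the restart replays five individual phases against the same config file, regenerating certs, kubeconfigs, the kubelet bootstrap, the static-pod manifests, and local etcd in that order. The sequence from this log, condensed:

    KVER=v1.31.0
    for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
      # $phase is intentionally unquoted so "certs all" splits into two arguments
      sudo env PATH="/var/lib/minikube/binaries/${KVER}:$PATH" \
        kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml
    done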
	I0829 11:22:51.182955    2025 api_server.go:52] waiting for apiserver process to appear ...
	I0829 11:22:51.183026    2025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 11:22:51.685388    2025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 11:22:52.184157    2025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 11:22:52.190150    2025 api_server.go:72] duration metric: took 1.007204584s to wait for apiserver process to appear ...
	I0829 11:22:52.190158    2025 api_server.go:88] waiting for apiserver healthz status ...
	I0829 11:22:52.190168    2025 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0829 11:22:54.348611    2025 api_server.go:279] https://192.168.105.4:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0829 11:22:54.348621    2025 api_server.go:103] status: https://192.168.105.4:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0829 11:22:54.348627    2025 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0829 11:22:54.361745    2025 api_server.go:279] https://192.168.105.4:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0829 11:22:54.361752    2025 api_server.go:103] status: https://192.168.105.4:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0829 11:22:54.692309    2025 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0829 11:22:54.702155    2025 api_server.go:279] https://192.168.105.4:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0829 11:22:54.702177    2025 api_server.go:103] status: https://192.168.105.4:8441/healthz returned error 500: (healthz body identical to the one above)
	I0829 11:22:55.192299    2025 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0829 11:22:55.205543    2025 api_server.go:279] https://192.168.105.4:8441/healthz returned 500: (healthz body identical to the one above)
	W0829 11:22:55.205566    2025 api_server.go:103] status: https://192.168.105.4:8441/healthz returned error 500: (healthz body identical to the one above)
	I0829 11:22:55.692200    2025 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0829 11:22:55.698493    2025 api_server.go:279] https://192.168.105.4:8441/healthz returned 200:
	ok
	I0829 11:22:55.702713    2025 api_server.go:141] control plane version: v1.31.0
	I0829 11:22:55.702721    2025 api_server.go:131] duration metric: took 3.512591458s to wait for apiserver health ...
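The healthz progression above is the normal shape of a control-plane restart: 403 while the fresh apiserver still rejects the anonymous probe, 500 while the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks are still seeding defaults (the default RBAC bindings that let unauthenticated clients read /healthz are created by exactly those hooks), then 200. The verbose check list can be requested directly against the same endpoint (-k because the serving cert is minikube's own CA):

    curl -k "https://192.168.105.4:8441/healthz?verbose"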
	I0829 11:22:55.702725    2025 cni.go:84] Creating CNI manager for ""
	I0829 11:22:55.702731    2025 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0829 11:22:55.706596    2025 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0829 11:22:55.710496    2025 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0829 11:22:55.714330    2025 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
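The 496 bytes written to /etc/cni/net.d/1-k8s.conflist are minikube's bridge CNI configuration. A minimal sketch in the shape minikube's bridge template uses; this is an approximation, since the exact payload is not reproduced in the log:

    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF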
	I0829 11:22:55.722348    2025 system_pods.go:43] waiting for kube-system pods to appear ...
	I0829 11:22:55.727447    2025 system_pods.go:59] 7 kube-system pods found
	I0829 11:22:55.727459    2025 system_pods.go:61] "coredns-6f6b679f8f-4wppn" [6caa35af-dfdb-4344-a077-a7f450fbb4f9] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0829 11:22:55.727462    2025 system_pods.go:61] "etcd-functional-312000" [cb7cba37-45a5-4dff-9180-be1e88d124ea] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0829 11:22:55.727465    2025 system_pods.go:61] "kube-apiserver-functional-312000" [2435d05e-e4b4-4231-81b5-99d038e69b76] Pending
	I0829 11:22:55.727468    2025 system_pods.go:61] "kube-controller-manager-functional-312000" [83de2a98-b459-42e7-afeb-149bfc09e71a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0829 11:22:55.727470    2025 system_pods.go:61] "kube-proxy-vgdtt" [51624d33-573a-49ac-b6cf-4af99ecdbdae] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0829 11:22:55.727472    2025 system_pods.go:61] "kube-scheduler-functional-312000" [6a6cb34b-5c51-41f0-9690-4a5dd8d107cf] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0829 11:22:55.727474    2025 system_pods.go:61] "storage-provisioner" [0b815560-b36d-412c-b4fa-17d80a9ac18f] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0829 11:22:55.727478    2025 system_pods.go:74] duration metric: took 5.124ms to wait for pod list to return data ...
	I0829 11:22:55.727481    2025 node_conditions.go:102] verifying NodePressure condition ...
	I0829 11:22:55.729602    2025 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0829 11:22:55.729615    2025 node_conditions.go:123] node cpu capacity is 2
	I0829 11:22:55.729620    2025 node_conditions.go:105] duration metric: took 2.136584ms to run NodePressure ...
	I0829 11:22:55.729627    2025 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 11:22:55.959610    2025 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0829 11:22:55.961973    2025 kubeadm.go:739] kubelet initialised
	I0829 11:22:55.961977    2025 kubeadm.go:740] duration metric: took 2.359708ms waiting for restarted kubelet to initialise ...
	I0829 11:22:55.961981    2025 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 11:22:55.964510    2025 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-4wppn" in "kube-system" namespace to be "Ready" ...
	I0829 11:22:57.971007    2025 pod_ready.go:103] pod "coredns-6f6b679f8f-4wppn" in "kube-system" namespace has status "Ready":"False"
	I0829 11:22:59.979405    2025 pod_ready.go:103] pod "coredns-6f6b679f8f-4wppn" in "kube-system" namespace has status "Ready":"False"
	I0829 11:23:01.970844    2025 pod_ready.go:93] pod "coredns-6f6b679f8f-4wppn" in "kube-system" namespace has status "Ready":"True"
	I0829 11:23:01.970853    2025 pod_ready.go:82] duration metric: took 6.006389709s for pod "coredns-6f6b679f8f-4wppn" in "kube-system" namespace to be "Ready" ...
	I0829 11:23:01.970859    2025 pod_ready.go:79] waiting up to 4m0s for pod "etcd-functional-312000" in "kube-system" namespace to be "Ready" ...
	I0829 11:23:02.479460    2025 pod_ready.go:93] pod "etcd-functional-312000" in "kube-system" namespace has status "Ready":"True"
	I0829 11:23:02.479474    2025 pod_ready.go:82] duration metric: took 508.613916ms for pod "etcd-functional-312000" in "kube-system" namespace to be "Ready" ...
	I0829 11:23:02.479487    2025 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-functional-312000" in "kube-system" namespace to be "Ready" ...
	I0829 11:23:04.486177    2025 pod_ready.go:93] pod "kube-apiserver-functional-312000" in "kube-system" namespace has status "Ready":"True"
	I0829 11:23:04.486187    2025 pod_ready.go:82] duration metric: took 2.006711333s for pod "kube-apiserver-functional-312000" in "kube-system" namespace to be "Ready" ...
	I0829 11:23:04.486195    2025 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-functional-312000" in "kube-system" namespace to be "Ready" ...
	I0829 11:23:06.494262    2025 pod_ready.go:103] pod "kube-controller-manager-functional-312000" in "kube-system" namespace has status "Ready":"False"
	I0829 11:23:06.995820    2025 pod_ready.go:93] pod "kube-controller-manager-functional-312000" in "kube-system" namespace has status "Ready":"True"
	I0829 11:23:06.995835    2025 pod_ready.go:82] duration metric: took 2.509654875s for pod "kube-controller-manager-functional-312000" in "kube-system" namespace to be "Ready" ...
	I0829 11:23:06.995846    2025 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-vgdtt" in "kube-system" namespace to be "Ready" ...
	I0829 11:23:07.000953    2025 pod_ready.go:93] pod "kube-proxy-vgdtt" in "kube-system" namespace has status "Ready":"True"
	I0829 11:23:07.000961    2025 pod_ready.go:82] duration metric: took 5.108417ms for pod "kube-proxy-vgdtt" in "kube-system" namespace to be "Ready" ...
	I0829 11:23:07.000970    2025 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-functional-312000" in "kube-system" namespace to be "Ready" ...
	I0829 11:23:07.005770    2025 pod_ready.go:93] pod "kube-scheduler-functional-312000" in "kube-system" namespace has status "Ready":"True"
	I0829 11:23:07.005776    2025 pod_ready.go:82] duration metric: took 4.800167ms for pod "kube-scheduler-functional-312000" in "kube-system" namespace to be "Ready" ...
	I0829 11:23:07.005783    2025 pod_ready.go:39] duration metric: took 11.043897125s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
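The "extra waiting" loop above polls each system-critical pod for its Ready condition, one label selector at a time. Outside minikube the same gate can be expressed with kubectl wait, e.g. for CoreDNS:

    kubectl -n kube-system wait pod -l k8s-app=kube-dns --for=condition=Ready --timeout=4m0s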
	I0829 11:23:07.005802    2025 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0829 11:23:07.014923    2025 ops.go:34] apiserver oom_adj: -16
	I0829 11:23:07.014930    2025 kubeadm.go:597] duration metric: took 16.862664291s to restartPrimaryControlPlane
	I0829 11:23:07.014935    2025 kubeadm.go:394] duration metric: took 16.872503166s to StartCluster
	I0829 11:23:07.014950    2025 settings.go:142] acquiring lock: {Name:mk4c43097bad4576952ccc223d0a8a031914c5a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 11:23:07.015131    2025 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19531-965/kubeconfig
	I0829 11:23:07.015837    2025 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19531-965/kubeconfig: {Name:mk8af293b3e18a99fbcb2b7e12f57a5251bf5686 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 11:23:07.016305    2025 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0829 11:23:07.016329    2025 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0829 11:23:07.016401    2025 addons.go:69] Setting storage-provisioner=true in profile "functional-312000"
	I0829 11:23:07.016413    2025 addons.go:69] Setting default-storageclass=true in profile "functional-312000"
	I0829 11:23:07.016424    2025 addons.go:234] Setting addon storage-provisioner=true in "functional-312000"
	W0829 11:23:07.016428    2025 addons.go:243] addon storage-provisioner should already be in state true
	I0829 11:23:07.016437    2025 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-312000"
	I0829 11:23:07.016450    2025 host.go:66] Checking if "functional-312000" exists ...
	I0829 11:23:07.016495    2025 config.go:182] Loaded profile config "functional-312000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0829 11:23:07.017984    2025 addons.go:234] Setting addon default-storageclass=true in "functional-312000"
	W0829 11:23:07.017989    2025 addons.go:243] addon default-storageclass should already be in state true
	I0829 11:23:07.017997    2025 host.go:66] Checking if "functional-312000" exists ...
	I0829 11:23:07.020362    2025 out.go:177] * Verifying Kubernetes components...
	I0829 11:23:07.020875    2025 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0829 11:23:07.020881    2025 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0829 11:23:07.020892    2025 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19531-965/.minikube/machines/functional-312000/id_rsa Username:docker}
	I0829 11:23:07.023400    2025 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 11:23:07.026369    2025 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 11:23:07.030370    2025 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0829 11:23:07.030375    2025 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0829 11:23:07.030382    2025 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19531-965/.minikube/machines/functional-312000/id_rsa Username:docker}
	I0829 11:23:07.152288    2025 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 11:23:07.157849    2025 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0829 11:23:07.161621    2025 node_ready.go:35] waiting up to 6m0s for node "functional-312000" to be "Ready" ...
	I0829 11:23:07.163189    2025 node_ready.go:49] node "functional-312000" has status "Ready":"True"
	I0829 11:23:07.163192    2025 node_ready.go:38] duration metric: took 1.563542ms for node "functional-312000" to be "Ready" ...
	I0829 11:23:07.163195    2025 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 11:23:07.165782    2025 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-4wppn" in "kube-system" namespace to be "Ready" ...
	I0829 11:23:07.168088    2025 pod_ready.go:93] pod "coredns-6f6b679f8f-4wppn" in "kube-system" namespace has status "Ready":"True"
	I0829 11:23:07.168092    2025 pod_ready.go:82] duration metric: took 2.306625ms for pod "coredns-6f6b679f8f-4wppn" in "kube-system" namespace to be "Ready" ...
	I0829 11:23:07.168095    2025 pod_ready.go:79] waiting up to 6m0s for pod "etcd-functional-312000" in "kube-system" namespace to be "Ready" ...
	I0829 11:23:07.208171    2025 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0829 11:23:07.484685    2025 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0829 11:23:07.488707    2025 addons.go:510] duration metric: took 472.393042ms for enable addons: enabled=[default-storageclass storage-provisioner]
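Addon enablement is simply kubectl apply of the rendered manifests, run over SSH inside the guest with its own version-matched kubectl and kubeconfig, which is why both addons come up in under half a second here:

    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml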
	I0829 11:23:07.570565    2025 pod_ready.go:93] pod "etcd-functional-312000" in "kube-system" namespace has status "Ready":"True"
	I0829 11:23:07.570571    2025 pod_ready.go:82] duration metric: took 402.4765ms for pod "etcd-functional-312000" in "kube-system" namespace to be "Ready" ...
	I0829 11:23:07.570575    2025 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-functional-312000" in "kube-system" namespace to be "Ready" ...
	I0829 11:23:07.975726    2025 pod_ready.go:93] pod "kube-apiserver-functional-312000" in "kube-system" namespace has status "Ready":"True"
	I0829 11:23:07.975755    2025 pod_ready.go:82] duration metric: took 405.170792ms for pod "kube-apiserver-functional-312000" in "kube-system" namespace to be "Ready" ...
	I0829 11:23:07.975777    2025 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-functional-312000" in "kube-system" namespace to be "Ready" ...
	I0829 11:23:08.374269    2025 pod_ready.go:93] pod "kube-controller-manager-functional-312000" in "kube-system" namespace has status "Ready":"True"
	I0829 11:23:08.374290    2025 pod_ready.go:82] duration metric: took 398.505208ms for pod "kube-controller-manager-functional-312000" in "kube-system" namespace to be "Ready" ...
	I0829 11:23:08.374310    2025 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-vgdtt" in "kube-system" namespace to be "Ready" ...
	I0829 11:23:08.774632    2025 pod_ready.go:93] pod "kube-proxy-vgdtt" in "kube-system" namespace has status "Ready":"True"
	I0829 11:23:08.774660    2025 pod_ready.go:82] duration metric: took 400.34125ms for pod "kube-proxy-vgdtt" in "kube-system" namespace to be "Ready" ...
	I0829 11:23:08.774676    2025 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-functional-312000" in "kube-system" namespace to be "Ready" ...
	I0829 11:23:09.175853    2025 pod_ready.go:93] pod "kube-scheduler-functional-312000" in "kube-system" namespace has status "Ready":"True"
	I0829 11:23:09.175884    2025 pod_ready.go:82] duration metric: took 401.198625ms for pod "kube-scheduler-functional-312000" in "kube-system" namespace to be "Ready" ...
	I0829 11:23:09.175907    2025 pod_ready.go:39] duration metric: took 2.012719417s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0829 11:23:09.175965    2025 api_server.go:52] waiting for apiserver process to appear ...
	I0829 11:23:09.176286    2025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 11:23:09.197045    2025 api_server.go:72] duration metric: took 2.180739458s to wait for apiserver process to appear ...
	I0829 11:23:09.197056    2025 api_server.go:88] waiting for apiserver healthz status ...
	I0829 11:23:09.197072    2025 api_server.go:253] Checking apiserver healthz at https://192.168.105.4:8441/healthz ...
	I0829 11:23:09.204510    2025 api_server.go:279] https://192.168.105.4:8441/healthz returned 200:
	ok
	I0829 11:23:09.205512    2025 api_server.go:141] control plane version: v1.31.0
	I0829 11:23:09.205520    2025 api_server.go:131] duration metric: took 8.459292ms to wait for apiserver health ...
	I0829 11:23:09.205526    2025 system_pods.go:43] waiting for kube-system pods to appear ...
	I0829 11:23:09.380288    2025 system_pods.go:59] 7 kube-system pods found
	I0829 11:23:09.380320    2025 system_pods.go:61] "coredns-6f6b679f8f-4wppn" [6caa35af-dfdb-4344-a077-a7f450fbb4f9] Running
	I0829 11:23:09.380329    2025 system_pods.go:61] "etcd-functional-312000" [cb7cba37-45a5-4dff-9180-be1e88d124ea] Running
	I0829 11:23:09.380333    2025 system_pods.go:61] "kube-apiserver-functional-312000" [2435d05e-e4b4-4231-81b5-99d038e69b76] Running
	I0829 11:23:09.380338    2025 system_pods.go:61] "kube-controller-manager-functional-312000" [83de2a98-b459-42e7-afeb-149bfc09e71a] Running
	I0829 11:23:09.380342    2025 system_pods.go:61] "kube-proxy-vgdtt" [51624d33-573a-49ac-b6cf-4af99ecdbdae] Running
	I0829 11:23:09.380346    2025 system_pods.go:61] "kube-scheduler-functional-312000" [6a6cb34b-5c51-41f0-9690-4a5dd8d107cf] Running
	I0829 11:23:09.380350    2025 system_pods.go:61] "storage-provisioner" [0b815560-b36d-412c-b4fa-17d80a9ac18f] Running
	I0829 11:23:09.380359    2025 system_pods.go:74] duration metric: took 174.828792ms to wait for pod list to return data ...
	I0829 11:23:09.380373    2025 default_sa.go:34] waiting for default service account to be created ...
	I0829 11:23:09.575288    2025 default_sa.go:45] found service account: "default"
	I0829 11:23:09.575331    2025 default_sa.go:55] duration metric: took 194.943417ms for default service account to be created ...
	I0829 11:23:09.575353    2025 system_pods.go:116] waiting for k8s-apps to be running ...
	I0829 11:23:09.778872    2025 system_pods.go:86] 7 kube-system pods found
	I0829 11:23:09.778904    2025 system_pods.go:89] "coredns-6f6b679f8f-4wppn" [6caa35af-dfdb-4344-a077-a7f450fbb4f9] Running
	I0829 11:23:09.778912    2025 system_pods.go:89] "etcd-functional-312000" [cb7cba37-45a5-4dff-9180-be1e88d124ea] Running
	I0829 11:23:09.778916    2025 system_pods.go:89] "kube-apiserver-functional-312000" [2435d05e-e4b4-4231-81b5-99d038e69b76] Running
	I0829 11:23:09.778921    2025 system_pods.go:89] "kube-controller-manager-functional-312000" [83de2a98-b459-42e7-afeb-149bfc09e71a] Running
	I0829 11:23:09.778925    2025 system_pods.go:89] "kube-proxy-vgdtt" [51624d33-573a-49ac-b6cf-4af99ecdbdae] Running
	I0829 11:23:09.778929    2025 system_pods.go:89] "kube-scheduler-functional-312000" [6a6cb34b-5c51-41f0-9690-4a5dd8d107cf] Running
	I0829 11:23:09.778933    2025 system_pods.go:89] "storage-provisioner" [0b815560-b36d-412c-b4fa-17d80a9ac18f] Running
	I0829 11:23:09.778944    2025 system_pods.go:126] duration metric: took 203.583542ms to wait for k8s-apps to be running ...
	I0829 11:23:09.778953    2025 system_svc.go:44] waiting for kubelet service to be running ....
	I0829 11:23:09.779138    2025 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 11:23:09.797618    2025 system_svc.go:56] duration metric: took 18.657042ms WaitForService to wait for kubelet
	I0829 11:23:09.797640    2025 kubeadm.go:582] duration metric: took 2.781342s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0829 11:23:09.797660    2025 node_conditions.go:102] verifying NodePressure condition ...
	I0829 11:23:09.970740    2025 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0829 11:23:09.970748    2025 node_conditions.go:123] node cpu capacity is 2
	I0829 11:23:09.970755    2025 node_conditions.go:105] duration metric: took 173.093458ms to run NodePressure ...
	I0829 11:23:09.970763    2025 start.go:241] waiting for startup goroutines ...
	I0829 11:23:09.970768    2025 start.go:246] waiting for cluster config update ...
	I0829 11:23:09.970775    2025 start.go:255] writing updated cluster config ...
	I0829 11:23:09.971176    2025 ssh_runner.go:195] Run: rm -f paused
	I0829 11:23:10.006069    2025 start.go:600] kubectl: 1.29.2, cluster: 1.31.0 (minor skew: 2)
	I0829 11:23:10.010407    2025 out.go:201] 
	W0829 11:23:10.013504    2025 out.go:270] ! /usr/local/bin/kubectl is version 1.29.2, which may have incompatibilities with Kubernetes 1.31.0.
	I0829 11:23:10.018413    2025 out.go:177]   - Want kubectl v1.31.0? Try 'minikube kubectl -- get pods -A'
	I0829 11:23:10.026380    2025 out.go:177] * Done! kubectl is now configured to use "functional-312000" cluster and "default" namespace by default
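The closing warning is kubectl's version-skew rule surfacing: the host's client at 1.29.2 is two minor versions behind the 1.31.0 server, one more than the supported one-minor skew, so minikube points at its bundled, version-matched kubectl. Verifying both versions through it:

    minikube -p functional-312000 kubectl -- version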
	
	
	==> Docker <==
	Aug 29 18:23:47 functional-312000 dockerd[5747]: time="2024-08-29T18:23:47.154394739Z" level=warning msg="cleaning up after shim disconnected" id=96065d2277a1d91e7e52d6a6de529b5b448b9df828a254f26310bddc5894f06b namespace=moby
	Aug 29 18:23:47 functional-312000 dockerd[5747]: time="2024-08-29T18:23:47.154473146Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 29 18:23:47 functional-312000 dockerd[5747]: time="2024-08-29T18:23:47.209931669Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 29 18:23:47 functional-312000 dockerd[5747]: time="2024-08-29T18:23:47.209962582Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 29 18:23:47 functional-312000 dockerd[5747]: time="2024-08-29T18:23:47.209967998Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 29 18:23:47 functional-312000 dockerd[5747]: time="2024-08-29T18:23:47.210072027Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 29 18:23:47 functional-312000 dockerd[5741]: time="2024-08-29T18:23:47.231182758Z" level=info msg="ignoring event" container=0650380888a9de102cd52e0446d3e56910d86dfb91e33ee9c1c3a25938ee9bf1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 29 18:23:47 functional-312000 dockerd[5747]: time="2024-08-29T18:23:47.231330073Z" level=info msg="shim disconnected" id=0650380888a9de102cd52e0446d3e56910d86dfb91e33ee9c1c3a25938ee9bf1 namespace=moby
	Aug 29 18:23:47 functional-312000 dockerd[5747]: time="2024-08-29T18:23:47.231381775Z" level=warning msg="cleaning up after shim disconnected" id=0650380888a9de102cd52e0446d3e56910d86dfb91e33ee9c1c3a25938ee9bf1 namespace=moby
	Aug 29 18:23:47 functional-312000 dockerd[5747]: time="2024-08-29T18:23:47.231386483Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 29 18:23:48 functional-312000 dockerd[5747]: time="2024-08-29T18:23:48.475358631Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 29 18:23:48 functional-312000 dockerd[5747]: time="2024-08-29T18:23:48.475394085Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 29 18:23:48 functional-312000 dockerd[5747]: time="2024-08-29T18:23:48.475409583Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 29 18:23:48 functional-312000 dockerd[5747]: time="2024-08-29T18:23:48.475444787Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 29 18:23:48 functional-312000 cri-dockerd[5999]: time="2024-08-29T18:23:48Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/bffc0a80b2962b321c0cec52a578ab00ae115f77b54f141ed13cfee1f63b82bc/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Aug 29 18:23:49 functional-312000 cri-dockerd[5999]: time="2024-08-29T18:23:49Z" level=info msg="Stop pulling image docker.io/nginx:latest: Status: Image is up to date for nginx:latest"
	Aug 29 18:23:49 functional-312000 dockerd[5747]: time="2024-08-29T18:23:49.294002121Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 29 18:23:49 functional-312000 dockerd[5747]: time="2024-08-29T18:23:49.294105609Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 29 18:23:49 functional-312000 dockerd[5747]: time="2024-08-29T18:23:49.294120065Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 29 18:23:49 functional-312000 dockerd[5747]: time="2024-08-29T18:23:49.294425945Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 29 18:23:57 functional-312000 dockerd[5747]: time="2024-08-29T18:23:57.291801676Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 29 18:23:57 functional-312000 dockerd[5747]: time="2024-08-29T18:23:57.291834839Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 29 18:23:57 functional-312000 dockerd[5747]: time="2024-08-29T18:23:57.291844587Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 29 18:23:57 functional-312000 dockerd[5747]: time="2024-08-29T18:23:57.291880083Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 29 18:23:57 functional-312000 cri-dockerd[5999]: time="2024-08-29T18:23:57Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/37cd522d1c52ca2950d810fa6eea5497502b9780101d9d42de9b5f2a400551a0/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                           CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	f4e8428f5f938       nginx@sha256:447a8665cc1dab95b1ca778e162215839ccbb9189104c79d7ec3a81e14577add   10 seconds ago       Running             myfrontend                0                   bffc0a80b2962       sp-pod
	0650380888a9d       72565bf5bbedf                                                                   12 seconds ago       Exited              echoserver-arm            2                   cd797acdb58be       hello-node-connect-65d86f57f4-zx8nc
	9863a801f6d10       72565bf5bbedf                                                                   20 seconds ago       Exited              echoserver-arm            2                   d2519695c0c24       hello-node-64b4f8f9ff-9zv77
	542c4a39a906c       nginx@sha256:c04c18adc2a407740a397c8407c011fc6c90026a9b65cceddef7ae5484360158   34 seconds ago       Running             nginx                     0                   ff2fe09fc9b26       nginx-svc
	c927b8c14fb9c       2437cf7621777                                                                   About a minute ago   Running             coredns                   2                   aefcc78cf2669       coredns-6f6b679f8f-4wppn
	a5ea9041885aa       ba04bb24b9575                                                                   About a minute ago   Running             storage-provisioner       2                   2b88d20442d31       storage-provisioner
	4a8801e5835d8       71d55d66fd4ee                                                                   About a minute ago   Running             kube-proxy                2                   8fc7a4586a06b       kube-proxy-vgdtt
	101bc7756c05e       fbbbd428abb4d                                                                   About a minute ago   Running             kube-scheduler            2                   26ad8af137f8d       kube-scheduler-functional-312000
	01c038f2b4cb1       27e3830e14027                                                                   About a minute ago   Running             etcd                      2                   9dc04fe51b55a       etcd-functional-312000
	853230991aa84       fcb0683e6bdbd                                                                   About a minute ago   Running             kube-controller-manager   2                   d63e160077967       kube-controller-manager-functional-312000
	1086d5ec258c7       cd0f0ae0ec9e0                                                                   About a minute ago   Running             kube-apiserver            0                   59b1ddf3125ea       kube-apiserver-functional-312000
	180e98e77f5f9       2437cf7621777                                                                   About a minute ago   Exited              coredns                   1                   88426a4422439       coredns-6f6b679f8f-4wppn
	b61a335098d49       71d55d66fd4ee                                                                   About a minute ago   Exited              kube-proxy                1                   2045959df0955       kube-proxy-vgdtt
	488c997d8b281       ba04bb24b9575                                                                   About a minute ago   Exited              storage-provisioner       1                   fbd545c98f016       storage-provisioner
	d35d8d1d0f5c7       27e3830e14027                                                                   About a minute ago   Exited              etcd                      1                   a8bfa396c38c8       etcd-functional-312000
	6139093704f18       fbbbd428abb4d                                                                   About a minute ago   Exited              kube-scheduler            1                   5ffc980043f75       kube-scheduler-functional-312000
	8d9234bb71965       fcb0683e6bdbd                                                                   About a minute ago   Exited              kube-controller-manager   1                   7c93ae567b087       kube-controller-manager-functional-312000
	
	
	==> coredns [180e98e77f5f] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.11.1
	linux/arm64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:38744 - 22516 "HINFO IN 1324102400152104475.6972045167918144933. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.009333687s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [c927b8c14fb9] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ea7a0d73d9d208f758b1f67640ef03c58089b9d9366cf3478df3bb369b210e39f213811b46224f8a04380814b6e0890ccd358f5b5e8c80bc22ac19c8601ee35b
	CoreDNS-1.11.1
	linux/arm64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:49373 - 44223 "HINFO IN 335119254331106436.6742033756457883145. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.009903556s
	[INFO] 10.244.0.1:1776 - 42760 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 4096" NOERROR qr,aa,rd 104 0.000109445s
	[INFO] 10.244.0.1:45513 - 30046 "AAAA IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 146 0.000102988s
	[INFO] 10.244.0.1:4596 - 4203 "SVCB IN _dns.resolver.arpa. udp 36 false 512" NXDOMAIN qr,rd,ra 116 0.001318005s
	[INFO] 10.244.0.1:9430 - 1319 "A IN nginx-svc.default.svc.cluster.local. udp 53 false 512" NOERROR qr,aa,rd 104 0.000049035s
	[INFO] 10.244.0.1:19082 - 42077 "A IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 104 0.00005766s
	[INFO] 10.244.0.1:29125 - 4074 "AAAA IN nginx-svc.default.svc.cluster.local. udp 64 false 1232" NOERROR qr,aa,rd 146 0.000089447s
	
	
	==> describe nodes <==
	Name:               functional-312000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-312000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=95341f0b655cea8be5ebfc6bf112c8367dc08d33
	                    minikube.k8s.io/name=functional-312000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_29T11_21_32_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 29 Aug 2024 18:21:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-312000
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 29 Aug 2024 18:23:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 29 Aug 2024 18:23:55 +0000   Thu, 29 Aug 2024 18:21:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 29 Aug 2024 18:23:55 +0000   Thu, 29 Aug 2024 18:21:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 29 Aug 2024 18:23:55 +0000   Thu, 29 Aug 2024 18:21:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 29 Aug 2024 18:23:55 +0000   Thu, 29 Aug 2024 18:21:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.105.4
	  Hostname:    functional-312000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904744Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             3904744Ki
	  pods:               110
	System Info:
	  Machine ID:                 82815d12401e406f985346768eea1b48
	  System UUID:                82815d12401e406f985346768eea1b48
	  Boot ID:                    61202e23-cfff-4b52-ad76-635aba0fb418
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://27.1.2
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox-mount                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         3s
	  default                     hello-node-64b4f8f9ff-9zv77                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         42s
	  default                     hello-node-connect-65d86f57f4-zx8nc          0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         38s
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-6f6b679f8f-4wppn                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     2m21s
	  kube-system                 etcd-functional-312000                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         2m27s
	  kube-system                 kube-apiserver-functional-312000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         64s
	  kube-system                 kube-controller-manager-functional-312000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m27s
	  kube-system                 kube-proxy-vgdtt                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m23s
	  kube-system                 kube-scheduler-functional-312000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m27s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m22s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 2m20s                kube-proxy       
	  Normal  Starting                 63s                  kube-proxy       
	  Normal  Starting                 113s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  2m27s                kubelet          Node functional-312000 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  2m27s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    2m27s                kubelet          Node functional-312000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m27s                kubelet          Node functional-312000 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m27s                kubelet          Starting kubelet.
	  Normal  NodeReady                2m24s                kubelet          Node functional-312000 status is now: NodeReady
	  Normal  RegisteredNode           2m23s                node-controller  Node functional-312000 event: Registered Node functional-312000 in Controller
	  Normal  NodeHasNoDiskPressure    117s (x8 over 117s)  kubelet          Node functional-312000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  117s (x8 over 117s)  kubelet          Node functional-312000 status is now: NodeHasSufficientMemory
	  Normal  Starting                 117s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     117s (x7 over 117s)  kubelet          Node functional-312000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  117s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           111s                 node-controller  Node functional-312000 event: Registered Node functional-312000 in Controller
	  Normal  Starting                 68s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  68s (x8 over 68s)    kubelet          Node functional-312000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    68s (x8 over 68s)    kubelet          Node functional-312000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     68s (x7 over 68s)    kubelet          Node functional-312000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  68s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           62s                  node-controller  Node functional-312000 event: Registered Node functional-312000 in Controller
	
	
	==> dmesg <==
	[  +1.013387] systemd-fstab-generator[3893]: Ignoring "noauto" option for root device
	[  +3.432883] kauditd_printk_skb: 199 callbacks suppressed
	[  +6.606893] kauditd_printk_skb: 33 callbacks suppressed
	[ +10.459730] systemd-fstab-generator[4824]: Ignoring "noauto" option for root device
	[ +13.765462] systemd-fstab-generator[5265]: Ignoring "noauto" option for root device
	[  +0.055136] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.101427] systemd-fstab-generator[5298]: Ignoring "noauto" option for root device
	[  +0.095389] systemd-fstab-generator[5310]: Ignoring "noauto" option for root device
	[  +0.102199] systemd-fstab-generator[5325]: Ignoring "noauto" option for root device
	[  +5.122553] kauditd_printk_skb: 89 callbacks suppressed
	[  +7.384381] systemd-fstab-generator[5948]: Ignoring "noauto" option for root device
	[  +0.073900] systemd-fstab-generator[5960]: Ignoring "noauto" option for root device
	[  +0.070984] systemd-fstab-generator[5972]: Ignoring "noauto" option for root device
	[  +0.101115] systemd-fstab-generator[5987]: Ignoring "noauto" option for root device
	[  +0.211486] systemd-fstab-generator[6160]: Ignoring "noauto" option for root device
	[  +1.137620] systemd-fstab-generator[6283]: Ignoring "noauto" option for root device
	[  +4.409727] kauditd_printk_skb: 199 callbacks suppressed
	[Aug29 18:23] kauditd_printk_skb: 33 callbacks suppressed
	[  +5.137041] systemd-fstab-generator[7295]: Ignoring "noauto" option for root device
	[  +6.501838] kauditd_printk_skb: 28 callbacks suppressed
	[  +7.574090] kauditd_printk_skb: 19 callbacks suppressed
	[  +5.047130] kauditd_printk_skb: 27 callbacks suppressed
	[ +12.911603] kauditd_printk_skb: 32 callbacks suppressed
	[  +7.977727] kauditd_printk_skb: 1 callbacks suppressed
	[ +10.230414] kauditd_printk_skb: 21 callbacks suppressed
	
	
	==> etcd [01c038f2b4cb] <==
	{"level":"info","ts":"2024-08-29T18:22:52.199178Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-29T18:22:52.200552Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 switched to configuration voters=(527499358918876438)"}
	{"level":"info","ts":"2024-08-29T18:22:52.200648Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-08-29T18:22:52.200656Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-08-29T18:22:52.200703Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","added-peer-id":"7520ddf439b1d16","added-peer-peer-urls":["https://192.168.105.4:2380"]}
	{"level":"info","ts":"2024-08-29T18:22:52.200756Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"80e92d98c466b02f","local-member-id":"7520ddf439b1d16","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-29T18:22:52.200773Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-29T18:22:52.200849Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"7520ddf439b1d16","initial-advertise-peer-urls":["https://192.168.105.4:2380"],"listen-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.105.4:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-29T18:22:52.200858Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-29T18:22:53.786317Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 is starting a new election at term 3"}
	{"level":"info","ts":"2024-08-29T18:22:53.786512Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-08-29T18:22:53.786596Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-08-29T18:22:53.786637Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 4"}
	{"level":"info","ts":"2024-08-29T18:22:53.786653Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2024-08-29T18:22:53.786677Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 4"}
	{"level":"info","ts":"2024-08-29T18:22:53.786700Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 4"}
	{"level":"info","ts":"2024-08-29T18:22:53.789765Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-29T18:22:53.789778Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-312000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-29T18:22:53.790088Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-29T18:22:53.792417Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-29T18:22:53.793989Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-29T18:22:53.794049Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-29T18:22:53.792417Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-29T18:22:53.795214Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-29T18:22:53.796777Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	
	
	==> etcd [d35d8d1d0f5c] <==
	{"level":"info","ts":"2024-08-29T18:22:04.813000Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-29T18:22:04.813085Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgPreVoteResp from 7520ddf439b1d16 at term 2"}
	{"level":"info","ts":"2024-08-29T18:22:04.813121Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became candidate at term 3"}
	{"level":"info","ts":"2024-08-29T18:22:04.813161Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 received MsgVoteResp from 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-08-29T18:22:04.813191Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"7520ddf439b1d16 became leader at term 3"}
	{"level":"info","ts":"2024-08-29T18:22:04.813250Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 7520ddf439b1d16 elected leader 7520ddf439b1d16 at term 3"}
	{"level":"info","ts":"2024-08-29T18:22:04.818279Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"7520ddf439b1d16","local-member-attributes":"{Name:functional-312000 ClientURLs:[https://192.168.105.4:2379]}","request-path":"/0/members/7520ddf439b1d16/attributes","cluster-id":"80e92d98c466b02f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-29T18:22:04.818593Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-29T18:22:04.818845Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-29T18:22:04.818987Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-29T18:22:04.819148Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-29T18:22:04.821148Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-29T18:22:04.821152Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-29T18:22:04.823074Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.105.4:2379"}
	{"level":"info","ts":"2024-08-29T18:22:04.824596Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-29T18:22:37.058616Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-08-29T18:22:37.058649Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"functional-312000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	{"level":"warn","ts":"2024-08-29T18:22:37.058689Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-29T18:22:37.058731Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-29T18:22:37.073710Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-29T18:22:37.073736Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.105.4:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-29T18:22:37.073766Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"7520ddf439b1d16","current-leader-member-id":"7520ddf439b1d16"}
	{"level":"info","ts":"2024-08-29T18:22:37.075688Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-08-29T18:22:37.075727Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.105.4:2380"}
	{"level":"info","ts":"2024-08-29T18:22:37.075731Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"functional-312000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.105.4:2380"],"advertise-client-urls":["https://192.168.105.4:2379"]}
	
	
	==> kernel <==
	 18:23:59 up 2 min,  0 users,  load average: 1.25, 0.72, 0.29
	Linux functional-312000 5.10.207 #1 SMP PREEMPT Tue Aug 27 17:57:16 UTC 2024 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [1086d5ec258c] <==
	I0829 18:22:54.413585       1 autoregister_controller.go:144] Starting autoregister controller
	I0829 18:22:54.413604       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0829 18:22:54.413615       1 cache.go:39] Caches are synced for autoregister controller
	I0829 18:22:54.413891       1 shared_informer.go:320] Caches are synced for configmaps
	I0829 18:22:54.414016       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0829 18:22:54.414047       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0829 18:22:54.416519       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	E0829 18:22:54.416620       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0829 18:22:54.444279       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0829 18:22:54.450368       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0829 18:22:54.450400       1 policy_source.go:224] refreshing policies
	I0829 18:22:54.466698       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0829 18:22:55.310819       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0829 18:22:55.859582       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0829 18:22:55.863459       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0829 18:22:55.875903       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0829 18:22:55.884078       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0829 18:22:55.886843       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0829 18:22:58.072423       1 controller.go:615] quota admission added evaluator for: endpoints
	I0829 18:22:58.124227       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0829 18:23:11.470807       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.96.104.118"}
	I0829 18:23:17.040815       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0829 18:23:17.085473       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.103.136.27"}
	I0829 18:23:21.142440       1 alloc.go:330] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.96.160.208"}
	I0829 18:23:31.580747       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.111.70.57"}
	
	
	==> kube-controller-manager [853230991aa8] <==
	I0829 18:22:57.919920       1 shared_informer.go:320] Caches are synced for resource quota
	I0829 18:22:58.335416       1 shared_informer.go:320] Caches are synced for garbage collector
	I0829 18:22:58.384938       1 shared_informer.go:320] Caches are synced for garbage collector
	I0829 18:22:58.385013       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0829 18:23:01.928584       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="7.19249ms"
	I0829 18:23:01.928832       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="30.622µs"
	I0829 18:23:17.047518       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="5.007375ms"
	I0829 18:23:17.054333       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="6.646127ms"
	I0829 18:23:17.060388       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="5.990918ms"
	I0829 18:23:17.060422       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="14.415µs"
	I0829 18:23:22.631462       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="23.705µs"
	I0829 18:23:23.671423       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="38.912µs"
	I0829 18:23:24.677484       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="26.663µs"
	I0829 18:23:25.194951       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-312000"
	I0829 18:23:31.549030       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="8.335473ms"
	I0829 18:23:31.553331       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="4.273099ms"
	I0829 18:23:31.553364       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="12.29µs"
	I0829 18:23:31.553379       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="5.832µs"
	I0829 18:23:32.784363       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="31.787µs"
	I0829 18:23:33.791119       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="24.289µs"
	I0829 18:23:39.885608       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="21.872µs"
	I0829 18:23:47.190387       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="27.58µs"
	I0829 18:23:48.066856       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-65d86f57f4" duration="33.455µs"
	I0829 18:23:52.201707       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-64b4f8f9ff" duration="37.287µs"
	I0829 18:23:55.974819       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-312000"
	
	
	==> kube-controller-manager [8d9234bb7196] <==
	I0829 18:22:08.703247       1 shared_informer.go:320] Caches are synced for PV protection
	I0829 18:22:08.703909       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0829 18:22:08.706164       1 shared_informer.go:320] Caches are synced for TTL
	I0829 18:22:08.706228       1 shared_informer.go:320] Caches are synced for node
	I0829 18:22:08.706257       1 range_allocator.go:171] "Sending events to api server" logger="node-ipam-controller"
	I0829 18:22:08.706331       1 range_allocator.go:177] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0829 18:22:08.706361       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0829 18:22:08.706393       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0829 18:22:08.706438       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-312000"
	I0829 18:22:08.779463       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0829 18:22:08.796986       1 shared_informer.go:320] Caches are synced for attach detach
	I0829 18:22:08.803885       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0829 18:22:08.889320       1 shared_informer.go:320] Caches are synced for resource quota
	I0829 18:22:08.905752       1 shared_informer.go:320] Caches are synced for HPA
	I0829 18:22:08.910801       1 shared_informer.go:320] Caches are synced for resource quota
	I0829 18:22:09.324340       1 shared_informer.go:320] Caches are synced for garbage collector
	I0829 18:22:09.403939       1 shared_informer.go:320] Caches are synced for garbage collector
	I0829 18:22:09.404017       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0829 18:22:12.419394       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="26.774768ms"
	I0829 18:22:12.420674       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="1.249189ms"
	I0829 18:22:12.429917       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="8.623752ms"
	I0829 18:22:12.429961       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="19.366µs"
	I0829 18:22:12.616870       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="3.870887ms"
	I0829 18:22:12.617474       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="15.576µs"
	I0829 18:22:36.261714       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-312000"
	
	
	==> kube-proxy [4a8801e5835d] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0829 18:22:55.731741       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0829 18:22:55.747107       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.105.4"]
	E0829 18:22:55.747146       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0829 18:22:55.766988       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0829 18:22:55.767010       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0829 18:22:55.767026       1 server_linux.go:169] "Using iptables Proxier"
	I0829 18:22:55.768956       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0829 18:22:55.769326       1 server.go:483] "Version info" version="v1.31.0"
	I0829 18:22:55.770198       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0829 18:22:55.771127       1 config.go:197] "Starting service config controller"
	I0829 18:22:55.771179       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0829 18:22:55.771208       1 config.go:104] "Starting endpoint slice config controller"
	I0829 18:22:55.771240       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0829 18:22:55.771461       1 config.go:326] "Starting node config controller"
	I0829 18:22:55.771484       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0829 18:22:55.871658       1 shared_informer.go:320] Caches are synced for node config
	I0829 18:22:55.871681       1 shared_informer.go:320] Caches are synced for service config
	I0829 18:22:55.871686       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [b61a335098d4] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0829 18:22:06.031070       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0829 18:22:06.039644       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.105.4"]
	E0829 18:22:06.039678       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0829 18:22:06.053364       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0829 18:22:06.053382       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0829 18:22:06.053411       1 server_linux.go:169] "Using iptables Proxier"
	I0829 18:22:06.054083       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0829 18:22:06.054216       1 server.go:483] "Version info" version="v1.31.0"
	I0829 18:22:06.054229       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0829 18:22:06.055144       1 config.go:197] "Starting service config controller"
	I0829 18:22:06.055158       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0829 18:22:06.055296       1 config.go:104] "Starting endpoint slice config controller"
	I0829 18:22:06.055303       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0829 18:22:06.055781       1 config.go:326] "Starting node config controller"
	I0829 18:22:06.055785       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0829 18:22:06.155252       1 shared_informer.go:320] Caches are synced for service config
	I0829 18:22:06.156156       1 shared_informer.go:320] Caches are synced for node config
	I0829 18:22:06.156162       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [101bc7756c05] <==
	I0829 18:22:52.646868       1 serving.go:386] Generated self-signed cert in-memory
	W0829 18:22:54.344610       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0829 18:22:54.344661       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0829 18:22:54.344684       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0829 18:22:54.344692       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0829 18:22:54.367432       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0829 18:22:54.368623       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0829 18:22:54.370932       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0829 18:22:54.371948       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0829 18:22:54.371987       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0829 18:22:54.372011       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0829 18:22:54.472721       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [6139093704f1] <==
	I0829 18:22:03.550183       1 serving.go:386] Generated self-signed cert in-memory
	W0829 18:22:05.346091       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0829 18:22:05.346537       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0829 18:22:05.346576       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0829 18:22:05.346595       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0829 18:22:05.372972       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0829 18:22:05.373854       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0829 18:22:05.375776       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0829 18:22:05.379430       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0829 18:22:05.379750       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0829 18:22:05.380802       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0829 18:22:05.481292       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0829 18:22:37.051085       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Aug 29 18:23:47 functional-312000 kubelet[6290]: I0829 18:23:47.283442    6290 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-8shqw\" (UniqueName: \"kubernetes.io/projected/4ab799f4-e2dd-4400-9c43-7f80f2533cc1-kube-api-access-8shqw\") on node \"functional-312000\" DevicePath \"\""
	Aug 29 18:23:47 functional-312000 kubelet[6290]: I0829 18:23:47.283456    6290 reconciler_common.go:288] "Volume detached for volume \"pvc-edfe4e00-3b09-4727-a4a8-d3cdbc178483\" (UniqueName: \"kubernetes.io/host-path/4ab799f4-e2dd-4400-9c43-7f80f2533cc1-pvc-edfe4e00-3b09-4727-a4a8-d3cdbc178483\") on node \"functional-312000\" DevicePath \"\""
	Aug 29 18:23:48 functional-312000 kubelet[6290]: I0829 18:23:48.029605    6290 scope.go:117] "RemoveContainer" containerID="8e1555fd09c1ca495431b11420eab3797e45cd0df5de444374dd3684ea15065b"
	Aug 29 18:23:48 functional-312000 kubelet[6290]: I0829 18:23:48.056866    6290 scope.go:117] "RemoveContainer" containerID="8e1555fd09c1ca495431b11420eab3797e45cd0df5de444374dd3684ea15065b"
	Aug 29 18:23:48 functional-312000 kubelet[6290]: E0829 18:23:48.057363    6290 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 8e1555fd09c1ca495431b11420eab3797e45cd0df5de444374dd3684ea15065b" containerID="8e1555fd09c1ca495431b11420eab3797e45cd0df5de444374dd3684ea15065b"
	Aug 29 18:23:48 functional-312000 kubelet[6290]: I0829 18:23:48.057409    6290 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"8e1555fd09c1ca495431b11420eab3797e45cd0df5de444374dd3684ea15065b"} err="failed to get container status \"8e1555fd09c1ca495431b11420eab3797e45cd0df5de444374dd3684ea15065b\": rpc error: code = Unknown desc = Error response from daemon: No such container: 8e1555fd09c1ca495431b11420eab3797e45cd0df5de444374dd3684ea15065b"
	Aug 29 18:23:48 functional-312000 kubelet[6290]: I0829 18:23:48.057969    6290 scope.go:117] "RemoveContainer" containerID="c3cd26493d472b2405769ce537c1f25f1190febe2f8dfa29452626c2e2a6cff4"
	Aug 29 18:23:48 functional-312000 kubelet[6290]: I0829 18:23:48.058188    6290 scope.go:117] "RemoveContainer" containerID="0650380888a9de102cd52e0446d3e56910d86dfb91e33ee9c1c3a25938ee9bf1"
	Aug 29 18:23:48 functional-312000 kubelet[6290]: E0829 18:23:48.058323    6290 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-connect-65d86f57f4-zx8nc_default(e04f2ba0-f5ea-43c6-8d83-acd4d77ecd75)\"" pod="default/hello-node-connect-65d86f57f4-zx8nc" podUID="e04f2ba0-f5ea-43c6-8d83-acd4d77ecd75"
	Aug 29 18:23:48 functional-312000 kubelet[6290]: E0829 18:23:48.127864    6290 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4ab799f4-e2dd-4400-9c43-7f80f2533cc1" containerName="myfrontend"
	Aug 29 18:23:48 functional-312000 kubelet[6290]: I0829 18:23:48.127896    6290 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ab799f4-e2dd-4400-9c43-7f80f2533cc1" containerName="myfrontend"
	Aug 29 18:23:48 functional-312000 kubelet[6290]: I0829 18:23:48.290819    6290 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5974n\" (UniqueName: \"kubernetes.io/projected/5213967b-8c32-40bb-8deb-86bf6db2c6f3-kube-api-access-5974n\") pod \"sp-pod\" (UID: \"5213967b-8c32-40bb-8deb-86bf6db2c6f3\") " pod="default/sp-pod"
	Aug 29 18:23:48 functional-312000 kubelet[6290]: I0829 18:23:48.290862    6290 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-edfe4e00-3b09-4727-a4a8-d3cdbc178483\" (UniqueName: \"kubernetes.io/host-path/5213967b-8c32-40bb-8deb-86bf6db2c6f3-pvc-edfe4e00-3b09-4727-a4a8-d3cdbc178483\") pod \"sp-pod\" (UID: \"5213967b-8c32-40bb-8deb-86bf6db2c6f3\") " pod="default/sp-pod"
	Aug 29 18:23:49 functional-312000 kubelet[6290]: I0829 18:23:49.191160    6290 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ab799f4-e2dd-4400-9c43-7f80f2533cc1" path="/var/lib/kubelet/pods/4ab799f4-e2dd-4400-9c43-7f80f2533cc1/volumes"
	Aug 29 18:23:50 functional-312000 kubelet[6290]: I0829 18:23:50.124011    6290 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/sp-pod" podStartSLOduration=1.400602128 podStartE2EDuration="2.123995152s" podCreationTimestamp="2024-08-29 18:23:48 +0000 UTC" firstStartedPulling="2024-08-29 18:23:48.534364692 +0000 UTC m=+57.413493078" lastFinishedPulling="2024-08-29 18:23:49.257757716 +0000 UTC m=+58.136886102" observedRunningTime="2024-08-29 18:23:50.12366515 +0000 UTC m=+59.002793536" watchObservedRunningTime="2024-08-29 18:23:50.123995152 +0000 UTC m=+59.003123538"
	Aug 29 18:23:51 functional-312000 kubelet[6290]: E0829 18:23:51.190439    6290 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 29 18:23:51 functional-312000 kubelet[6290]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 29 18:23:51 functional-312000 kubelet[6290]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 29 18:23:51 functional-312000 kubelet[6290]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 29 18:23:51 functional-312000 kubelet[6290]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 29 18:23:51 functional-312000 kubelet[6290]: I0829 18:23:51.273059    6290 scope.go:117] "RemoveContainer" containerID="05833a52685b892ad67a940f019a22947a3b1173a3e60b9dddd514345d1d025b"
	Aug 29 18:23:52 functional-312000 kubelet[6290]: I0829 18:23:52.185061    6290 scope.go:117] "RemoveContainer" containerID="9863a801f6d1087db6a78952b287d149fe28a6ab704fbadca0da063237ee3617"
	Aug 29 18:23:52 functional-312000 kubelet[6290]: E0829 18:23:52.186071    6290 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echoserver-arm\" with CrashLoopBackOff: \"back-off 20s restarting failed container=echoserver-arm pod=hello-node-64b4f8f9ff-9zv77_default(cd27663d-a0ab-41d3-87b9-2100b179a622)\"" pod="default/hello-node-64b4f8f9ff-9zv77" podUID="cd27663d-a0ab-41d3-87b9-2100b179a622"
	Aug 29 18:23:57 functional-312000 kubelet[6290]: I0829 18:23:57.077773    6290 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/23abc6e1-92eb-40b1-9e2c-44bca706cc5f-test-volume\") pod \"busybox-mount\" (UID: \"23abc6e1-92eb-40b1-9e2c-44bca706cc5f\") " pod="default/busybox-mount"
	Aug 29 18:23:57 functional-312000 kubelet[6290]: I0829 18:23:57.077857    6290 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6hff\" (UniqueName: \"kubernetes.io/projected/23abc6e1-92eb-40b1-9e2c-44bca706cc5f-kube-api-access-g6hff\") pod \"busybox-mount\" (UID: \"23abc6e1-92eb-40b1-9e2c-44bca706cc5f\") " pod="default/busybox-mount"
	
	
	==> storage-provisioner [488c997d8b28] <==
	I0829 18:22:05.977415       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0829 18:22:05.987447       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0829 18:22:05.987476       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0829 18:22:06.023920       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0829 18:22:06.024070       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-312000_b3aaa8dd-3316-4bc6-bf99-5234aaac8b62!
	I0829 18:22:06.024739       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f5eb3b31-59d8-41f4-8401-65d2c004c771", APIVersion:"v1", ResourceVersion:"451", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-312000_b3aaa8dd-3316-4bc6-bf99-5234aaac8b62 became leader
	I0829 18:22:06.124563       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-312000_b3aaa8dd-3316-4bc6-bf99-5234aaac8b62!
	
	
	==> storage-provisioner [a5ea9041885a] <==
	I0829 18:22:55.679964       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0829 18:22:55.705961       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0829 18:22:55.707610       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0829 18:23:13.120375       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0829 18:23:13.121188       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-312000_c6e33e8c-d34c-4262-98de-c61735789796!
	I0829 18:23:13.122535       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f5eb3b31-59d8-41f4-8401-65d2c004c771", APIVersion:"v1", ResourceVersion:"619", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-312000_c6e33e8c-d34c-4262-98de-c61735789796 became leader
	I0829 18:23:13.222352       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-312000_c6e33e8c-d34c-4262-98de-c61735789796!
	I0829 18:23:33.766805       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I0829 18:23:33.766834       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    2b4e74d9-1bd7-42c7-a476-17deb26b8503 314 0 2024-08-29 18:21:37 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2024-08-29 18:21:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-edfe4e00-3b09-4727-a4a8-d3cdbc178483 &PersistentVolumeClaim{ObjectMeta:{myclaim  default  edfe4e00-3b09-4727-a4a8-d3cdbc178483 733 0 2024-08-29 18:23:33 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2024-08-29 18:23:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2024-08-29 18:23:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I0829 18:23:33.767418       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-edfe4e00-3b09-4727-a4a8-d3cdbc178483" provisioned
	I0829 18:23:33.767440       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I0829 18:23:33.767447       1 volume_store.go:212] Trying to save persistentvolume "pvc-edfe4e00-3b09-4727-a4a8-d3cdbc178483"
	I0829 18:23:33.767880       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"edfe4e00-3b09-4727-a4a8-d3cdbc178483", APIVersion:"v1", ResourceVersion:"733", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I0829 18:23:33.771561       1 volume_store.go:219] persistentvolume "pvc-edfe4e00-3b09-4727-a4a8-d3cdbc178483" saved
	I0829 18:23:33.771845       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"edfe4e00-3b09-4727-a4a8-d3cdbc178483", APIVersion:"v1", ResourceVersion:"733", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-edfe4e00-3b09-4727-a4a8-d3cdbc178483
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p functional-312000 -n functional-312000
helpers_test.go:261: (dbg) Run:  kubectl --context functional-312000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-312000 describe pod busybox-mount
helpers_test.go:282: (dbg) kubectl --context functional-312000 describe pod busybox-mount:

-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-312000/192.168.105.4
	Start Time:       Thu, 29 Aug 2024 11:23:56 -0700
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  mount-munger:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test; date >> /mount-9p/pod-dates
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-g6hff (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-g6hff:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  4s    default-scheduler  Successfully assigned default/busybox-mount to functional-312000
	  Normal  Pulling    3s    kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

-- /stdout --
helpers_test.go:285: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (28.68s)
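To iterate on this failure outside CI, the failing subtest can be re-run in isolation; the following is a minimal sketch, assuming a minikube source checkout whose integration harness lives under test/integration (where the helpers_test.go and ha_test.go lines above originate) and a prebuilt out/minikube-darwin-arm64 binary, with any harness-specific flags (driver, start args, binary path) omitted:

	go test ./test/integration -v -timeout 30m -run "TestFunctional/parallel/ServiceCmdConnect"

The post-mortem's pod filter can likewise be replayed by hand with the same kubectl invocation shown at helpers_test.go:261, which selects every pod whose status.phase is not Running.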

TestMultiControlPlane/serial/StopSecondaryNode (214.12s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-darwin-arm64 -p ha-692000 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-darwin-arm64 -p ha-692000 node stop m02 -v=7 --alsologtostderr: (12.192440041s)
ha_test.go:369: (dbg) Run:  out/minikube-darwin-arm64 -p ha-692000 status -v=7 --alsologtostderr
E0829 11:29:39.063031    1418 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/functional-312000/client.crt: no such file or directory" logger="UnhandledError"
E0829 11:31:00.968027    1418 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/functional-312000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:369: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-692000 status -v=7 --alsologtostderr: exit status 7 (2m55.973822708s)

-- stdout --
	ha-692000
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-692000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-692000-m03
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-692000-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

-- /stdout --
** stderr ** 
	I0829 11:29:19.979975    2674 out.go:345] Setting OutFile to fd 1 ...
	I0829 11:29:19.980138    2674 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 11:29:19.980142    2674 out.go:358] Setting ErrFile to fd 2...
	I0829 11:29:19.980144    2674 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 11:29:19.980271    2674 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19531-965/.minikube/bin
	I0829 11:29:19.980405    2674 out.go:352] Setting JSON to false
	I0829 11:29:19.980416    2674 mustload.go:65] Loading cluster: ha-692000
	I0829 11:29:19.980453    2674 notify.go:220] Checking for updates...
	I0829 11:29:19.980651    2674 config.go:182] Loaded profile config "ha-692000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0829 11:29:19.980662    2674 status.go:255] checking status of ha-692000 ...
	I0829 11:29:19.981432    2674 status.go:330] ha-692000 host status = "Running" (err=<nil>)
	I0829 11:29:19.981444    2674 host.go:66] Checking if "ha-692000" exists ...
	I0829 11:29:19.981555    2674 host.go:66] Checking if "ha-692000" exists ...
	I0829 11:29:19.981669    2674 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0829 11:29:19.981678    2674 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19531-965/.minikube/machines/ha-692000/id_rsa Username:docker}
	W0829 11:29:45.905220    2674 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: operation timed out
	W0829 11:29:45.905354    2674 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0829 11:29:45.905382    2674 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0829 11:29:45.905403    2674 status.go:257] ha-692000 status: &{Name:ha-692000 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0829 11:29:45.905432    2674 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0829 11:29:45.905441    2674 status.go:255] checking status of ha-692000-m02 ...
	I0829 11:29:45.905787    2674 status.go:330] ha-692000-m02 host status = "Stopped" (err=<nil>)
	I0829 11:29:45.905799    2674 status.go:343] host is not running, skipping remaining checks
	I0829 11:29:45.905803    2674 status.go:257] ha-692000-m02 status: &{Name:ha-692000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0829 11:29:45.905810    2674 status.go:255] checking status of ha-692000-m03 ...
	I0829 11:29:45.906800    2674 status.go:330] ha-692000-m03 host status = "Running" (err=<nil>)
	I0829 11:29:45.906810    2674 host.go:66] Checking if "ha-692000-m03" exists ...
	I0829 11:29:45.906940    2674 host.go:66] Checking if "ha-692000-m03" exists ...
	I0829 11:29:45.907066    2674 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0829 11:29:45.907075    2674 sshutil.go:53] new ssh client: &{IP:192.168.105.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19531-965/.minikube/machines/ha-692000-m03/id_rsa Username:docker}
	W0829 11:31:00.891294    2674 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.7:22: connect: operation timed out
	W0829 11:31:00.891360    2674 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	E0829 11:31:00.891369    2674 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0829 11:31:00.891373    2674 status.go:257] ha-692000-m03 status: &{Name:ha-692000-m03 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0829 11:31:00.891385    2674 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0829 11:31:00.891389    2674 status.go:255] checking status of ha-692000-m04 ...
	I0829 11:31:00.892136    2674 status.go:330] ha-692000-m04 host status = "Running" (err=<nil>)
	I0829 11:31:00.892144    2674 host.go:66] Checking if "ha-692000-m04" exists ...
	I0829 11:31:00.892252    2674 host.go:66] Checking if "ha-692000-m04" exists ...
	I0829 11:31:00.892379    2674 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0829 11:31:00.892385    2674 sshutil.go:53] new ssh client: &{IP:192.168.105.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19531-965/.minikube/machines/ha-692000-m04/id_rsa Username:docker}
	W0829 11:32:15.893193    2674 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.8:22: connect: operation timed out
	W0829 11:32:15.893245    2674 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	E0829 11:32:15.893254    2674 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	I0829 11:32:15.893259    2674 status.go:257] ha-692000-m04 status: &{Name:ha-692000-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0829 11:32:15.893267    2674 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out

** /stderr **
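
Every per-node check in the stderr above reduces to one probe: run df -h /var | awk 'NR==2{print $5}' over SSH, and treat a failed dial to port 22 as Host:Error with kubelet/apiserver Nonexistent. The pipeline itself only extracts the Use% column from df's data row; run locally as a rough illustration (standalone sketch, not minikube's implementation):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Second line of `df -h /var` is the data row; column 5 is Use%.
		out, err := exec.Command("sh", "-c", "df -h /var | awk 'NR==2{print $5}'").Output()
		if err != nil {
			fmt.Println("probe failed:", err) // the report renders this path as Host:Error
			return
		}
		fmt.Println("/var usage:", strings.TrimSpace(string(out)))
	}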
ha_test.go:378: status says not three hosts are running: args "out/minikube-darwin-arm64 -p ha-692000 status -v=7 --alsologtostderr": ha-692000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-692000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-692000-m03
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-692000-m04
type: Worker
host: Error
kubelet: Nonexistent

ha_test.go:381: status says not three kubelets are running: args "out/minikube-darwin-arm64 -p ha-692000 status -v=7 --alsologtostderr": ha-692000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-692000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-692000-m03
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-692000-m04
type: Worker
host: Error
kubelet: Nonexistent

ha_test.go:384: status says not two apiservers are running: args "out/minikube-darwin-arm64 -p ha-692000 status -v=7 --alsologtostderr": ha-692000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-692000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-692000-m03
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

ha-692000-m04
type: Worker
host: Error
kubelet: Nonexistent

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-692000 -n ha-692000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-692000 -n ha-692000: exit status 3 (25.954446209s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0829 11:32:41.847743    3027 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0829 11:32:41.847758    3027 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-692000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (214.12s)
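
Note the shape of this failure: only m02 was stopped on purpose, yet the other three nodes all failed with dial tcp <ip>:22: connect: operation timed out, which points at unreachable VMs (or a dead socket_vmnet network) rather than a merely degraded cluster. A quick reachability probe over the same endpoints (sketch; the addresses are the ones reported above):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// SSH endpoints of the three nodes that reported Host:Error above.
		for _, addr := range []string{"192.168.105.5:22", "192.168.105.7:22", "192.168.105.8:22"} {
			conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
			if err != nil {
				fmt.Println(addr, "unreachable:", err)
				continue
			}
			conn.Close()
			fmt.Println(addr, "reachable")
		}
	}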

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (104.79s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
E0829 11:33:17.080172    1418 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/functional-312000/client.crt: no such file or directory" logger="UnhandledError"
E0829 11:33:32.559756    1418 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/addons-048000/client.crt: no such file or directory" logger="UnhandledError"
E0829 11:33:44.807068    1418 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/functional-312000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:390: (dbg) Done: out/minikube-darwin-arm64 profile list --output json: (1m18.800205458s)
ha_test.go:413: expected profile "ha-692000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-692000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-692000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.0\",\"ClusterName\":\"ha-692000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"K
ubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\
":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docke
r\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-692000 -n ha-692000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-692000 -n ha-692000: exit status 3 (25.985396458s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0829 11:34:26.629732    3063 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0829 11:34:26.629763    3063 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-692000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (104.79s)

TestMultiControlPlane/serial/RestartSecondaryNode (208.97s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-darwin-arm64 -p ha-692000 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-692000 node start m02 -v=7 --alsologtostderr: exit status 80 (5.123580625s)

-- stdout --
	* Starting "ha-692000-m02" control-plane node in "ha-692000" cluster
	* Restarting existing qemu2 VM for "ha-692000-m02" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-692000-m02" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0829 11:34:26.690093    3070 out.go:345] Setting OutFile to fd 1 ...
	I0829 11:34:26.690404    3070 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 11:34:26.690410    3070 out.go:358] Setting ErrFile to fd 2...
	I0829 11:34:26.690413    3070 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 11:34:26.690560    3070 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19531-965/.minikube/bin
	I0829 11:34:26.690867    3070 mustload.go:65] Loading cluster: ha-692000
	I0829 11:34:26.691164    3070 config.go:182] Loaded profile config "ha-692000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	W0829 11:34:26.691482    3070 host.go:58] "ha-692000-m02" host status: Stopped
	I0829 11:34:26.695049    3070 out.go:177] * Starting "ha-692000-m02" control-plane node in "ha-692000" cluster
	I0829 11:34:26.699838    3070 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0829 11:34:26.699859    3070 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19531-965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0829 11:34:26.699867    3070 cache.go:56] Caching tarball of preloaded images
	I0829 11:34:26.699959    3070 preload.go:172] Found /Users/jenkins/minikube-integration/19531-965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0829 11:34:26.699966    3070 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0829 11:34:26.700029    3070 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/ha-692000/config.json ...
	I0829 11:34:26.700444    3070 start.go:360] acquireMachinesLock for ha-692000-m02: {Name:mk54c878341df5948b162b01843176e59d4f0973 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0829 11:34:26.700497    3070 start.go:364] duration metric: took 34.708µs to acquireMachinesLock for "ha-692000-m02"
	I0829 11:34:26.700506    3070 start.go:96] Skipping create...Using existing machine configuration
	I0829 11:34:26.700512    3070 fix.go:54] fixHost starting: m02
	I0829 11:34:26.700659    3070 fix.go:112] recreateIfNeeded on ha-692000-m02: state=Stopped err=<nil>
	W0829 11:34:26.700665    3070 fix.go:138] unexpected machine state, will restart: <nil>
	I0829 11:34:26.704008    3070 out.go:177] * Restarting existing qemu2 VM for "ha-692000-m02" ...
	I0829 11:34:26.707853    3070 qemu.go:418] Using hvf for hardware acceleration
	I0829 11:34:26.707905    3070 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19531-965/.minikube/machines/ha-692000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19531-965/.minikube/machines/ha-692000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19531-965/.minikube/machines/ha-692000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:70:88:5c:14:6f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19531-965/.minikube/machines/ha-692000-m02/disk.qcow2
	I0829 11:34:26.710660    3070 main.go:141] libmachine: STDOUT: 
	I0829 11:34:26.710683    3070 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0829 11:34:26.710714    3070 fix.go:56] duration metric: took 10.201375ms for fixHost
	I0829 11:34:26.710718    3070 start.go:83] releasing machines lock for "ha-692000-m02", held for 10.21625ms
	W0829 11:34:26.710726    3070 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0829 11:34:26.710766    3070 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0829 11:34:26.710769    3070 start.go:729] Will try again in 5 seconds ...
	I0829 11:34:31.712855    3070 start.go:360] acquireMachinesLock for ha-692000-m02: {Name:mk54c878341df5948b162b01843176e59d4f0973 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0829 11:34:31.713583    3070 start.go:364] duration metric: took 570.541µs to acquireMachinesLock for "ha-692000-m02"
	I0829 11:34:31.713854    3070 start.go:96] Skipping create...Using existing machine configuration
	I0829 11:34:31.713878    3070 fix.go:54] fixHost starting: m02
	I0829 11:34:31.714693    3070 fix.go:112] recreateIfNeeded on ha-692000-m02: state=Stopped err=<nil>
	W0829 11:34:31.714717    3070 fix.go:138] unexpected machine state, will restart: <nil>
	I0829 11:34:31.719638    3070 out.go:177] * Restarting existing qemu2 VM for "ha-692000-m02" ...
	I0829 11:34:31.723459    3070 qemu.go:418] Using hvf for hardware acceleration
	I0829 11:34:31.723664    3070 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19531-965/.minikube/machines/ha-692000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19531-965/.minikube/machines/ha-692000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19531-965/.minikube/machines/ha-692000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:70:88:5c:14:6f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19531-965/.minikube/machines/ha-692000-m02/disk.qcow2
	I0829 11:34:31.731512    3070 main.go:141] libmachine: STDOUT: 
	I0829 11:34:31.731577    3070 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0829 11:34:31.731669    3070 fix.go:56] duration metric: took 17.795625ms for fixHost
	I0829 11:34:31.731687    3070 start.go:83] releasing machines lock for "ha-692000-m02", held for 18.012709ms
	W0829 11:34:31.731857    3070 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-692000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-692000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0829 11:34:31.736622    3070 out.go:201] 
	W0829 11:34:31.740716    3070 out.go:270] X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0829 11:34:31.740735    3070 out.go:270] * 
	* 
	W0829 11:34:31.746547    3070 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0829 11:34:31.750556    3070 out.go:201] 

** /stderr **
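
The node start dies before QEMU ever boots: socket_vmnet_client cannot connect to /var/run/socket_vmnet, so there is nothing to attach the VM's NIC to. Whether anything is serving that socket can be checked directly (sketch, using the socket path from the log; needs the same permissions as the failing client):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// "Connection refused" in the log above means no listener on this socket.
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			fmt.Println("socket_vmnet not serving:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is up")
	}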
ha_test.go:422: I0829 11:34:26.690093    3070 out.go:345] Setting OutFile to fd 1 ...
I0829 11:34:26.690404    3070 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0829 11:34:26.690410    3070 out.go:358] Setting ErrFile to fd 2...
I0829 11:34:26.690413    3070 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0829 11:34:26.690560    3070 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19531-965/.minikube/bin
I0829 11:34:26.690867    3070 mustload.go:65] Loading cluster: ha-692000
I0829 11:34:26.691164    3070 config.go:182] Loaded profile config "ha-692000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
W0829 11:34:26.691482    3070 host.go:58] "ha-692000-m02" host status: Stopped
I0829 11:34:26.695049    3070 out.go:177] * Starting "ha-692000-m02" control-plane node in "ha-692000" cluster
I0829 11:34:26.699838    3070 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
I0829 11:34:26.699859    3070 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19531-965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
I0829 11:34:26.699867    3070 cache.go:56] Caching tarball of preloaded images
I0829 11:34:26.699959    3070 preload.go:172] Found /Users/jenkins/minikube-integration/19531-965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0829 11:34:26.699966    3070 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
I0829 11:34:26.700029    3070 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/ha-692000/config.json ...
I0829 11:34:26.700444    3070 start.go:360] acquireMachinesLock for ha-692000-m02: {Name:mk54c878341df5948b162b01843176e59d4f0973 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0829 11:34:26.700497    3070 start.go:364] duration metric: took 34.708µs to acquireMachinesLock for "ha-692000-m02"
I0829 11:34:26.700506    3070 start.go:96] Skipping create...Using existing machine configuration
I0829 11:34:26.700512    3070 fix.go:54] fixHost starting: m02
I0829 11:34:26.700659    3070 fix.go:112] recreateIfNeeded on ha-692000-m02: state=Stopped err=<nil>
W0829 11:34:26.700665    3070 fix.go:138] unexpected machine state, will restart: <nil>
I0829 11:34:26.704008    3070 out.go:177] * Restarting existing qemu2 VM for "ha-692000-m02" ...
I0829 11:34:26.707853    3070 qemu.go:418] Using hvf for hardware acceleration
I0829 11:34:26.707905    3070 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19531-965/.minikube/machines/ha-692000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19531-965/.minikube/machines/ha-692000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19531-965/.minikube/machines/ha-692000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:70:88:5c:14:6f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19531-965/.minikube/machines/ha-692000-m02/disk.qcow2
I0829 11:34:26.710660    3070 main.go:141] libmachine: STDOUT: 
I0829 11:34:26.710683    3070 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0829 11:34:26.710714    3070 fix.go:56] duration metric: took 10.201375ms for fixHost
I0829 11:34:26.710718    3070 start.go:83] releasing machines lock for "ha-692000-m02", held for 10.21625ms
W0829 11:34:26.710726    3070 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0829 11:34:26.710766    3070 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0829 11:34:26.710769    3070 start.go:729] Will try again in 5 seconds ...
I0829 11:34:31.712855    3070 start.go:360] acquireMachinesLock for ha-692000-m02: {Name:mk54c878341df5948b162b01843176e59d4f0973 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0829 11:34:31.713583    3070 start.go:364] duration metric: took 570.541µs to acquireMachinesLock for "ha-692000-m02"
I0829 11:34:31.713854    3070 start.go:96] Skipping create...Using existing machine configuration
I0829 11:34:31.713878    3070 fix.go:54] fixHost starting: m02
I0829 11:34:31.714693    3070 fix.go:112] recreateIfNeeded on ha-692000-m02: state=Stopped err=<nil>
W0829 11:34:31.714717    3070 fix.go:138] unexpected machine state, will restart: <nil>
I0829 11:34:31.719638    3070 out.go:177] * Restarting existing qemu2 VM for "ha-692000-m02" ...
I0829 11:34:31.723459    3070 qemu.go:418] Using hvf for hardware acceleration
I0829 11:34:31.723664    3070 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19531-965/.minikube/machines/ha-692000-m02/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19531-965/.minikube/machines/ha-692000-m02/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19531-965/.minikube/machines/ha-692000-m02/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:70:88:5c:14:6f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19531-965/.minikube/machines/ha-692000-m02/disk.qcow2
I0829 11:34:31.731512    3070 main.go:141] libmachine: STDOUT: 
I0829 11:34:31.731577    3070 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0829 11:34:31.731669    3070 fix.go:56] duration metric: took 17.795625ms for fixHost
I0829 11:34:31.731687    3070 start.go:83] releasing machines lock for "ha-692000-m02", held for 18.012709ms
W0829 11:34:31.731857    3070 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-692000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* Failed to start qemu2 VM. Running "minikube delete -p ha-692000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0829 11:34:31.736622    3070 out.go:201] 
W0829 11:34:31.740716    3070 out.go:270] X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0829 11:34:31.740735    3070 out.go:270] * 
* 
W0829 11:34:31.746547    3070 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0829 11:34:31.750556    3070 out.go:201] 

ha_test.go:423: secondary control-plane node start returned an error. args "out/minikube-darwin-arm64 -p ha-692000 node start m02 -v=7 --alsologtostderr": exit status 80
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-692000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-692000 status -v=7 --alsologtostderr: exit status 7 (2m57.852152125s)

-- stdout --
	ha-692000
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-692000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-692000-m03
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-692000-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

-- /stdout --
** stderr ** 
	I0829 11:34:31.810319    3074 out.go:345] Setting OutFile to fd 1 ...
	I0829 11:34:31.810742    3074 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 11:34:31.810748    3074 out.go:358] Setting ErrFile to fd 2...
	I0829 11:34:31.810751    3074 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 11:34:31.810961    3074 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19531-965/.minikube/bin
	I0829 11:34:31.811142    3074 out.go:352] Setting JSON to false
	I0829 11:34:31.811154    3074 mustload.go:65] Loading cluster: ha-692000
	I0829 11:34:31.811330    3074 notify.go:220] Checking for updates...
	I0829 11:34:31.811695    3074 config.go:182] Loaded profile config "ha-692000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0829 11:34:31.811707    3074 status.go:255] checking status of ha-692000 ...
	I0829 11:34:31.812544    3074 status.go:330] ha-692000 host status = "Running" (err=<nil>)
	I0829 11:34:31.812557    3074 host.go:66] Checking if "ha-692000" exists ...
	I0829 11:34:31.812676    3074 host.go:66] Checking if "ha-692000" exists ...
	I0829 11:34:31.812808    3074 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0829 11:34:31.812817    3074 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19531-965/.minikube/machines/ha-692000/id_rsa Username:docker}
	W0829 11:34:31.813020    3074 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0829 11:34:31.813040    3074 retry.go:31] will retry after 167.350217ms: dial tcp 192.168.105.5:22: connect: host is down
	W0829 11:34:31.980645    3074 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0829 11:34:31.980686    3074 retry.go:31] will retry after 485.698779ms: dial tcp 192.168.105.5:22: connect: host is down
	W0829 11:34:32.468539    3074 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0829 11:34:32.468559    3074 retry.go:31] will retry after 767.804906ms: dial tcp 192.168.105.5:22: connect: host is down
	W0829 11:34:33.238512    3074 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0829 11:34:33.238565    3074 retry.go:31] will retry after 129.102575ms: new client: new client: dial tcp 192.168.105.5:22: connect: host is down
	I0829 11:34:33.369734    3074 sshutil.go:53] new ssh client: &{IP:192.168.105.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19531-965/.minikube/machines/ha-692000/id_rsa Username:docker}
	W0829 11:34:33.369994    3074 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: host is down
	I0829 11:34:33.370007    3074 retry.go:31] will retry after 307.929187ms: dial tcp 192.168.105.5:22: connect: host is down
	W0829 11:34:59.598310    3074 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.5:22: connect: operation timed out
	W0829 11:34:59.598368    3074 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0829 11:34:59.598377    3074 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0829 11:34:59.598381    3074 status.go:257] ha-692000 status: &{Name:ha-692000 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0829 11:34:59.598392    3074 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	I0829 11:34:59.598396    3074 status.go:255] checking status of ha-692000-m02 ...
	I0829 11:34:59.598630    3074 status.go:330] ha-692000-m02 host status = "Stopped" (err=<nil>)
	I0829 11:34:59.598637    3074 status.go:343] host is not running, skipping remaining checks
	I0829 11:34:59.598639    3074 status.go:257] ha-692000-m02 status: &{Name:ha-692000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0829 11:34:59.598644    3074 status.go:255] checking status of ha-692000-m03 ...
	I0829 11:34:59.599363    3074 status.go:330] ha-692000-m03 host status = "Running" (err=<nil>)
	I0829 11:34:59.599372    3074 host.go:66] Checking if "ha-692000-m03" exists ...
	I0829 11:34:59.599498    3074 host.go:66] Checking if "ha-692000-m03" exists ...
	I0829 11:34:59.599628    3074 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0829 11:34:59.599635    3074 sshutil.go:53] new ssh client: &{IP:192.168.105.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19531-965/.minikube/machines/ha-692000-m03/id_rsa Username:docker}
	W0829 11:36:14.601328    3074 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.7:22: connect: operation timed out
	W0829 11:36:14.601370    3074 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	E0829 11:36:14.601378    3074 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0829 11:36:14.601382    3074 status.go:257] ha-692000-m03 status: &{Name:ha-692000-m03 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0829 11:36:14.601392    3074 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.7:22: connect: operation timed out
	I0829 11:36:14.601395    3074 status.go:255] checking status of ha-692000-m04 ...
	I0829 11:36:14.602397    3074 status.go:330] ha-692000-m04 host status = "Running" (err=<nil>)
	I0829 11:36:14.602404    3074 host.go:66] Checking if "ha-692000-m04" exists ...
	I0829 11:36:14.602493    3074 host.go:66] Checking if "ha-692000-m04" exists ...
	I0829 11:36:14.602605    3074 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0829 11:36:14.602612    3074 sshutil.go:53] new ssh client: &{IP:192.168.105.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19531-965/.minikube/machines/ha-692000-m04/id_rsa Username:docker}
	W0829 11:37:29.602787    3074 sshutil.go:64] dial failure (will retry): dial tcp 192.168.105.8:22: connect: operation timed out
	W0829 11:37:29.602969    3074 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	E0829 11:37:29.603013    3074 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out
	I0829 11:37:29.603036    3074 status.go:257] ha-692000-m04 status: &{Name:ha-692000-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0829 11:37:29.603091    3074 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.105.8:22: connect: operation timed out

** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-692000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-692000 -n ha-692000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-692000 -n ha-692000: exit status 3 (25.992020833s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0829 11:37:55.595425    3115 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out
	E0829 11:37:55.595530    3115 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.105.5:22: connect: operation timed out

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-692000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (208.97s)
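
The sshutil lines above ("will retry after 167.350217ms", then ~486ms, ~768ms) show the dial being retried with growing, randomized delays before the hard dial timeout finally wins. The same pattern in isolation (sketch, not the retry.go implementation; the address is reused from the log):

	package main

	import (
		"fmt"
		"math/rand"
		"net"
		"time"
	)

	// dialWithBackoff retries a TCP dial with jittered, doubling delays
	// until the overall deadline passes; returns the last error on failure.
	func dialWithBackoff(addr string, overall time.Duration) (net.Conn, error) {
		var lastErr error
		delay := 100 * time.Millisecond
		for end := time.Now().Add(overall); time.Now().Before(end); {
			conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
			if err == nil {
				return conn, nil
			}
			lastErr = err
			sleep := delay + time.Duration(rand.Int63n(int64(delay)))
			fmt.Println("will retry after", sleep, ":", err)
			time.Sleep(sleep)
			delay *= 2
		}
		return nil, lastErr
	}

	func main() {
		if conn, err := dialWithBackoff("192.168.105.5:22", 30*time.Second); err == nil {
			conn.Close()
		}
	}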

TestMultiControlPlane/serial/RestartClusterKeepsNodes (283.48s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-692000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-darwin-arm64 stop -p ha-692000 -v=7 --alsologtostderr
E0829 11:39:55.644304    1418 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/addons-048000/client.crt: no such file or directory" logger="UnhandledError"
E0829 11:43:17.071908    1418 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/functional-312000/client.crt: no such file or directory" logger="UnhandledError"
E0829 11:43:32.551380    1418 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/addons-048000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Done: out/minikube-darwin-arm64 stop -p ha-692000 -v=7 --alsologtostderr: (4m38.087878084s)
ha_test.go:467: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-692000 --wait=true -v=7 --alsologtostderr
ha_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-692000 --wait=true -v=7 --alsologtostderr: exit status 80 (5.222080167s)

-- stdout --
	* [ha-692000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19531
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19531-965/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19531-965/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-692000" primary control-plane node in "ha-692000" cluster
	* Restarting existing qemu2 VM for "ha-692000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-692000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0829 11:43:53.857866    3235 out.go:345] Setting OutFile to fd 1 ...
	I0829 11:43:53.858059    3235 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 11:43:53.858064    3235 out.go:358] Setting ErrFile to fd 2...
	I0829 11:43:53.858067    3235 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 11:43:53.858246    3235 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19531-965/.minikube/bin
	I0829 11:43:53.859615    3235 out.go:352] Setting JSON to false
	I0829 11:43:53.879492    3235 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2597,"bootTime":1724954436,"procs":463,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0829 11:43:53.879566    3235 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0829 11:43:53.883641    3235 out.go:177] * [ha-692000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0829 11:43:53.890565    3235 out.go:177]   - MINIKUBE_LOCATION=19531
	I0829 11:43:53.890632    3235 notify.go:220] Checking for updates...
	I0829 11:43:53.898479    3235 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19531-965/kubeconfig
	I0829 11:43:53.902557    3235 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0829 11:43:53.905501    3235 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0829 11:43:53.908540    3235 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19531-965/.minikube
	I0829 11:43:53.911569    3235 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0829 11:43:53.913274    3235 config.go:182] Loaded profile config "ha-692000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0829 11:43:53.913331    3235 driver.go:392] Setting default libvirt URI to qemu:///system
	I0829 11:43:53.917497    3235 out.go:177] * Using the qemu2 driver based on existing profile
	I0829 11:43:53.924388    3235 start.go:297] selected driver: qemu2
	I0829 11:43:53.924394    3235 start.go:901] validating driver "qemu2" against &{Name:ha-692000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesV
ersion:v1.31.0 ClusterName:ha-692000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:
false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mou
nt9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 11:43:53.924463    3235 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0829 11:43:53.927423    3235 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0829 11:43:53.927473    3235 cni.go:84] Creating CNI manager for ""
	I0829 11:43:53.927479    3235 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0829 11:43:53.927526    3235 start.go:340] cluster config:
	{Name:ha-692000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-692000 Namespace:default APIServerHAVIP:192.168.1
05.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false
helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 11:43:53.932096    3235 iso.go:125] acquiring lock: {Name:mke1867919ff797f263eb38fb348ce00bba9d753 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 11:43:53.940571    3235 out.go:177] * Starting "ha-692000" primary control-plane node in "ha-692000" cluster
	I0829 11:43:53.944575    3235 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0829 11:43:53.944595    3235 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19531-965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0829 11:43:53.944605    3235 cache.go:56] Caching tarball of preloaded images
	I0829 11:43:53.944675    3235 preload.go:172] Found /Users/jenkins/minikube-integration/19531-965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0829 11:43:53.944682    3235 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0829 11:43:53.944770    3235 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/ha-692000/config.json ...
	I0829 11:43:53.945291    3235 start.go:360] acquireMachinesLock for ha-692000: {Name:mk54c878341df5948b162b01843176e59d4f0973 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0829 11:43:53.945331    3235 start.go:364] duration metric: took 33.667µs to acquireMachinesLock for "ha-692000"
	I0829 11:43:53.945341    3235 start.go:96] Skipping create...Using existing machine configuration
	I0829 11:43:53.945346    3235 fix.go:54] fixHost starting: 
	I0829 11:43:53.945478    3235 fix.go:112] recreateIfNeeded on ha-692000: state=Stopped err=<nil>
	W0829 11:43:53.945487    3235 fix.go:138] unexpected machine state, will restart: <nil>
	I0829 11:43:53.949593    3235 out.go:177] * Restarting existing qemu2 VM for "ha-692000" ...
	I0829 11:43:53.957527    3235 qemu.go:418] Using hvf for hardware acceleration
	I0829 11:43:53.957573    3235 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19531-965/.minikube/machines/ha-692000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19531-965/.minikube/machines/ha-692000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19531-965/.minikube/machines/ha-692000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f2:5f:87:92:d7:5c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19531-965/.minikube/machines/ha-692000/disk.qcow2
	I0829 11:43:53.959633    3235 main.go:141] libmachine: STDOUT: 
	I0829 11:43:53.959658    3235 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0829 11:43:53.959699    3235 fix.go:56] duration metric: took 14.351209ms for fixHost
	I0829 11:43:53.959704    3235 start.go:83] releasing machines lock for "ha-692000", held for 14.368041ms
	W0829 11:43:53.959712    3235 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0829 11:43:53.959747    3235 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0829 11:43:53.959753    3235 start.go:729] Will try again in 5 seconds ...
	I0829 11:43:58.961935    3235 start.go:360] acquireMachinesLock for ha-692000: {Name:mk54c878341df5948b162b01843176e59d4f0973 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0829 11:43:58.962340    3235 start.go:364] duration metric: took 290.708µs to acquireMachinesLock for "ha-692000"
	I0829 11:43:58.962467    3235 start.go:96] Skipping create...Using existing machine configuration
	I0829 11:43:58.962485    3235 fix.go:54] fixHost starting: 
	I0829 11:43:58.963142    3235 fix.go:112] recreateIfNeeded on ha-692000: state=Stopped err=<nil>
	W0829 11:43:58.963169    3235 fix.go:138] unexpected machine state, will restart: <nil>
	I0829 11:43:58.971595    3235 out.go:177] * Restarting existing qemu2 VM for "ha-692000" ...
	I0829 11:43:58.974586    3235 qemu.go:418] Using hvf for hardware acceleration
	I0829 11:43:58.974861    3235 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19531-965/.minikube/machines/ha-692000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19531-965/.minikube/machines/ha-692000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19531-965/.minikube/machines/ha-692000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f2:5f:87:92:d7:5c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19531-965/.minikube/machines/ha-692000/disk.qcow2
	I0829 11:43:58.983413    3235 main.go:141] libmachine: STDOUT: 
	I0829 11:43:58.983469    3235 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0829 11:43:58.983536    3235 fix.go:56] duration metric: took 21.051542ms for fixHost
	I0829 11:43:58.983551    3235 start.go:83] releasing machines lock for "ha-692000", held for 21.18675ms
	W0829 11:43:58.983736    3235 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-692000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-692000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0829 11:43:58.990592    3235 out.go:201] 
	W0829 11:43:58.994659    3235 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0829 11:43:58.994682    3235 out.go:270] * 
	* 
	W0829 11:43:58.997067    3235 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0829 11:43:59.004581    3235 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:469: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p ha-692000 -v=7 --alsologtostderr" : exit status 80
ha_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-692000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-692000 -n ha-692000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-692000 -n ha-692000: exit status 7 (32.509916ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-692000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (283.48s)
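Every restart attempt above dies on the same line: `Failed to connect to "/var/run/socket_vmnet": Connection refused`. A quick way to confirm the failure is in the socket_vmnet daemon rather than in QEMU is to dial the Unix socket directly, which is the first thing socket_vmnet_client does before handing a file descriptor to qemu-system-aarch64. The following is a minimal diagnostic sketch, not part of the test suite; the socket path is taken from the SocketVMnetPath field in the cluster config above.

```go
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	const sock = "/var/run/socket_vmnet" // SocketVMnetPath from the profile config

	// Stat first: distinguishes "socket file missing" from "daemon not accepting".
	if _, err := os.Stat(sock); err != nil {
		fmt.Println("socket file problem:", err)
		return
	}

	// "Connection refused", as seen in the log above, means the file exists
	// but no socket_vmnet process is listening behind it.
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		fmt.Println("dial failed:", err)
		return
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}
```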

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (0.1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-darwin-arm64 -p ha-692000 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-692000 node delete m03 -v=7 --alsologtostderr: exit status 83 (39.982292ms)

                                                
                                                
-- stdout --
	* The control-plane node ha-692000-m03 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-692000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0829 11:43:59.145107    3247 out.go:345] Setting OutFile to fd 1 ...
	I0829 11:43:59.145347    3247 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 11:43:59.145350    3247 out.go:358] Setting ErrFile to fd 2...
	I0829 11:43:59.145352    3247 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 11:43:59.145476    3247 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19531-965/.minikube/bin
	I0829 11:43:59.145707    3247 mustload.go:65] Loading cluster: ha-692000
	I0829 11:43:59.145909    3247 config.go:182] Loaded profile config "ha-692000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	W0829 11:43:59.146205    3247 out.go:270] ! The control-plane node ha-692000 host is not running (will try others): state=Stopped
	! The control-plane node ha-692000 host is not running (will try others): state=Stopped
	W0829 11:43:59.146320    3247 out.go:270] ! The control-plane node ha-692000-m02 host is not running (will try others): state=Stopped
	! The control-plane node ha-692000-m02 host is not running (will try others): state=Stopped
	I0829 11:43:59.150366    3247 out.go:177] * The control-plane node ha-692000-m03 host is not running: state=Stopped
	I0829 11:43:59.153397    3247 out.go:177]   To start a cluster, run: "minikube start -p ha-692000"

                                                
                                                
** /stderr **
ha_test.go:489: node delete returned an error. args "out/minikube-darwin-arm64 -p ha-692000 node delete m03 -v=7 --alsologtostderr": exit status 83
ha_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 -p ha-692000 status -v=7 --alsologtostderr
ha_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-692000 status -v=7 --alsologtostderr: exit status 7 (29.516541ms)

                                                
                                                
-- stdout --
	ha-692000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-692000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-692000-m03
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-692000-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0829 11:43:59.184832    3249 out.go:345] Setting OutFile to fd 1 ...
	I0829 11:43:59.184961    3249 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 11:43:59.184964    3249 out.go:358] Setting ErrFile to fd 2...
	I0829 11:43:59.184967    3249 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 11:43:59.185089    3249 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19531-965/.minikube/bin
	I0829 11:43:59.185201    3249 out.go:352] Setting JSON to false
	I0829 11:43:59.185211    3249 mustload.go:65] Loading cluster: ha-692000
	I0829 11:43:59.185273    3249 notify.go:220] Checking for updates...
	I0829 11:43:59.185429    3249 config.go:182] Loaded profile config "ha-692000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0829 11:43:59.185437    3249 status.go:255] checking status of ha-692000 ...
	I0829 11:43:59.185653    3249 status.go:330] ha-692000 host status = "Stopped" (err=<nil>)
	I0829 11:43:59.185657    3249 status.go:343] host is not running, skipping remaining checks
	I0829 11:43:59.185659    3249 status.go:257] ha-692000 status: &{Name:ha-692000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0829 11:43:59.185669    3249 status.go:255] checking status of ha-692000-m02 ...
	I0829 11:43:59.185756    3249 status.go:330] ha-692000-m02 host status = "Stopped" (err=<nil>)
	I0829 11:43:59.185759    3249 status.go:343] host is not running, skipping remaining checks
	I0829 11:43:59.185761    3249 status.go:257] ha-692000-m02 status: &{Name:ha-692000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0829 11:43:59.185765    3249 status.go:255] checking status of ha-692000-m03 ...
	I0829 11:43:59.185856    3249 status.go:330] ha-692000-m03 host status = "Stopped" (err=<nil>)
	I0829 11:43:59.185858    3249 status.go:343] host is not running, skipping remaining checks
	I0829 11:43:59.185860    3249 status.go:257] ha-692000-m03 status: &{Name:ha-692000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0829 11:43:59.185864    3249 status.go:255] checking status of ha-692000-m04 ...
	I0829 11:43:59.185955    3249 status.go:330] ha-692000-m04 host status = "Stopped" (err=<nil>)
	I0829 11:43:59.185958    3249 status.go:343] host is not running, skipping remaining checks
	I0829 11:43:59.185960    3249 status.go:257] ha-692000-m04 status: &{Name:ha-692000-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:495: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-692000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-692000 -n ha-692000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-692000 -n ha-692000: exit status 7 (29.9195ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-692000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (0.10s)
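Note how the harness distinguishes outcomes purely by exit code: `node delete` returns 83 (wrong host state) while `status` returns 7 (stopped). A caller can recover these codes with os/exec, as in this sketch; it runs the same command the test does, under the assumption the binary is on the relative path used in the report.

```go
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Same invocation as ha_test.go:487; in this report it exits 83
	// because every host in the profile is stopped.
	cmd := exec.Command("out/minikube-darwin-arm64", "-p", "ha-692000", "node", "delete", "m03")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		fmt.Println("exit status:", exitErr.ExitCode()) // e.g. 83 here, 7 for `status`
	}
}
```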

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (1.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-darwin-arm64 profile list --output json: (1.005799083s)
ha_test.go:413: expected profile "ha-692000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-692000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-692000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.0\",\"ClusterName\":\"ha-692000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"K
ubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kub
evirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\
"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-692000 -n ha-692000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-692000 -n ha-692000: exit status 7 (59.437584ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-692000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (1.07s)
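The assertion at ha_test.go:413 reads the `Status` field of the profile out of `profile list --output json`; the expected "Degraded" never appears because the entire cluster is stopped. A hedged sketch of that decode step follows, with the struct shape inferred from the JSON quoted in the failure message rather than from minikube's own types.

```go
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// Only the fields the assertion inspects; shape inferred from the
// `{"invalid":[],"valid":[{"Name":...,"Status":...}]}` payload above.
type profileList struct {
	Valid []struct {
		Name   string `json:"Name"`
		Status string `json:"Status"`
	} `json:"valid"`
}

func main() {
	out, err := exec.Command("out/minikube-darwin-arm64", "profile", "list", "--output", "json").Output()
	if err != nil {
		fmt.Println("profile list failed:", err)
		return
	}
	var pl profileList
	if err := json.Unmarshal(out, &pl); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	for _, p := range pl.Valid {
		fmt.Printf("%s: %s\n", p.Name, p.Status) // expected "Degraded", got "Stopped"
	}
}
```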

                                                
                                    
TestMultiControlPlane/serial/StopCluster (251.16s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-darwin-arm64 -p ha-692000 stop -v=7 --alsologtostderr
E0829 11:44:40.161544    1418 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/functional-312000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:531: (dbg) Done: out/minikube-darwin-arm64 -p ha-692000 stop -v=7 --alsologtostderr: (4m11.056959209s)
ha_test.go:537: (dbg) Run:  out/minikube-darwin-arm64 -p ha-692000 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-692000 status -v=7 --alsologtostderr: exit status 7 (69.527959ms)

                                                
                                                
-- stdout --
	ha-692000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-692000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-692000-m03
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-692000-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0829 11:48:11.403298    3349 out.go:345] Setting OutFile to fd 1 ...
	I0829 11:48:11.403512    3349 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 11:48:11.403516    3349 out.go:358] Setting ErrFile to fd 2...
	I0829 11:48:11.403519    3349 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 11:48:11.403680    3349 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19531-965/.minikube/bin
	I0829 11:48:11.403832    3349 out.go:352] Setting JSON to false
	I0829 11:48:11.403845    3349 mustload.go:65] Loading cluster: ha-692000
	I0829 11:48:11.403884    3349 notify.go:220] Checking for updates...
	I0829 11:48:11.404139    3349 config.go:182] Loaded profile config "ha-692000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0829 11:48:11.404152    3349 status.go:255] checking status of ha-692000 ...
	I0829 11:48:11.404462    3349 status.go:330] ha-692000 host status = "Stopped" (err=<nil>)
	I0829 11:48:11.404467    3349 status.go:343] host is not running, skipping remaining checks
	I0829 11:48:11.404470    3349 status.go:257] ha-692000 status: &{Name:ha-692000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0829 11:48:11.404483    3349 status.go:255] checking status of ha-692000-m02 ...
	I0829 11:48:11.404618    3349 status.go:330] ha-692000-m02 host status = "Stopped" (err=<nil>)
	I0829 11:48:11.404623    3349 status.go:343] host is not running, skipping remaining checks
	I0829 11:48:11.404626    3349 status.go:257] ha-692000-m02 status: &{Name:ha-692000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0829 11:48:11.404632    3349 status.go:255] checking status of ha-692000-m03 ...
	I0829 11:48:11.404760    3349 status.go:330] ha-692000-m03 host status = "Stopped" (err=<nil>)
	I0829 11:48:11.404766    3349 status.go:343] host is not running, skipping remaining checks
	I0829 11:48:11.404769    3349 status.go:257] ha-692000-m03 status: &{Name:ha-692000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0829 11:48:11.404773    3349 status.go:255] checking status of ha-692000-m04 ...
	I0829 11:48:11.404906    3349 status.go:330] ha-692000-m04 host status = "Stopped" (err=<nil>)
	I0829 11:48:11.404910    3349 status.go:343] host is not running, skipping remaining checks
	I0829 11:48:11.404912    3349 status.go:257] ha-692000-m04 status: &{Name:ha-692000-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:543: status says not two control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-692000 status -v=7 --alsologtostderr": ha-692000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-692000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-692000-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-692000-m04
type: Worker
host: Stopped
kubelet: Stopped

                                                
                                                
ha_test.go:549: status says not three kubelets are stopped: args "out/minikube-darwin-arm64 -p ha-692000 status -v=7 --alsologtostderr": ha-692000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-692000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-692000-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-692000-m04
type: Worker
host: Stopped
kubelet: Stopped

                                                
                                                
ha_test.go:552: status says not two apiservers are stopped: args "out/minikube-darwin-arm64 -p ha-692000 status -v=7 --alsologtostderr": ha-692000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-692000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-692000-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-692000-m04
type: Worker
host: Stopped
kubelet: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-692000 -n ha-692000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-692000 -n ha-692000: exit status 7 (32.753208ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-692000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (251.16s)
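The three assertions at ha_test.go:543, :549, and :552 fail because they tally roles and component states out of the plain-text status dump, and a fully stopped four-node cluster does not match the expected counts. This is a plausible reconstruction of that tally, not the test's actual implementation, using an abridged copy of the status text captured above.

```go
package main

import (
	"fmt"
	"strings"
)

func main() {
	// Abridged status output from the report: one control plane plus the worker.
	status := `ha-692000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-692000-m04
type: Worker
host: Stopped
kubelet: Stopped
`

	// Counting substrings is enough to reproduce the checks' shape.
	controlPlanes := strings.Count(status, "type: Control Plane")
	stoppedKubelets := strings.Count(status, "kubelet: Stopped")
	stoppedAPIServers := strings.Count(status, "apiserver: Stopped")
	fmt.Println(controlPlanes, stoppedKubelets, stoppedAPIServers)
}
```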

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (5.24s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-692000 --wait=true -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:560: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-692000 --wait=true -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (5.180278583s)

                                                
                                                
-- stdout --
	* [ha-692000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19531
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19531-965/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19531-965/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-692000" primary control-plane node in "ha-692000" cluster
	* Restarting existing qemu2 VM for "ha-692000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-692000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0829 11:48:11.466830    3353 out.go:345] Setting OutFile to fd 1 ...
	I0829 11:48:11.466954    3353 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 11:48:11.466957    3353 out.go:358] Setting ErrFile to fd 2...
	I0829 11:48:11.466959    3353 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 11:48:11.467107    3353 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19531-965/.minikube/bin
	I0829 11:48:11.468172    3353 out.go:352] Setting JSON to false
	I0829 11:48:11.484356    3353 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2855,"bootTime":1724954436,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0829 11:48:11.484420    3353 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0829 11:48:11.489997    3353 out.go:177] * [ha-692000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0829 11:48:11.497002    3353 out.go:177]   - MINIKUBE_LOCATION=19531
	I0829 11:48:11.497039    3353 notify.go:220] Checking for updates...
	I0829 11:48:11.504964    3353 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19531-965/kubeconfig
	I0829 11:48:11.507959    3353 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0829 11:48:11.511973    3353 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0829 11:48:11.515017    3353 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19531-965/.minikube
	I0829 11:48:11.517930    3353 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0829 11:48:11.521301    3353 config.go:182] Loaded profile config "ha-692000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0829 11:48:11.521599    3353 driver.go:392] Setting default libvirt URI to qemu:///system
	I0829 11:48:11.526001    3353 out.go:177] * Using the qemu2 driver based on existing profile
	I0829 11:48:11.532972    3353 start.go:297] selected driver: qemu2
	I0829 11:48:11.532980    3353 start.go:901] validating driver "qemu2" against &{Name:ha-692000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesV
ersion:v1.31.0 ClusterName:ha-692000 Namespace:default APIServerHAVIP:192.168.105.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storage
class:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-ho
st Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 11:48:11.533100    3353 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0829 11:48:11.535541    3353 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0829 11:48:11.535594    3353 cni.go:84] Creating CNI manager for ""
	I0829 11:48:11.535600    3353 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0829 11:48:11.535659    3353 start.go:340] cluster config:
	{Name:ha-692000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:ha-692000 Namespace:default APIServerHAVIP:192.168.1
05.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.5 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.105.6 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.105.7 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.168.105.8 Port:0 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false
helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 11:48:11.539373    3353 iso.go:125] acquiring lock: {Name:mke1867919ff797f263eb38fb348ce00bba9d753 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 11:48:11.547956    3353 out.go:177] * Starting "ha-692000" primary control-plane node in "ha-692000" cluster
	I0829 11:48:11.551931    3353 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0829 11:48:11.551949    3353 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19531-965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0829 11:48:11.551964    3353 cache.go:56] Caching tarball of preloaded images
	I0829 11:48:11.552020    3353 preload.go:172] Found /Users/jenkins/minikube-integration/19531-965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0829 11:48:11.552025    3353 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0829 11:48:11.552099    3353 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/ha-692000/config.json ...
	I0829 11:48:11.552573    3353 start.go:360] acquireMachinesLock for ha-692000: {Name:mk54c878341df5948b162b01843176e59d4f0973 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0829 11:48:11.552610    3353 start.go:364] duration metric: took 29.959µs to acquireMachinesLock for "ha-692000"
	I0829 11:48:11.552618    3353 start.go:96] Skipping create...Using existing machine configuration
	I0829 11:48:11.552625    3353 fix.go:54] fixHost starting: 
	I0829 11:48:11.552749    3353 fix.go:112] recreateIfNeeded on ha-692000: state=Stopped err=<nil>
	W0829 11:48:11.552757    3353 fix.go:138] unexpected machine state, will restart: <nil>
	I0829 11:48:11.555973    3353 out.go:177] * Restarting existing qemu2 VM for "ha-692000" ...
	I0829 11:48:11.563858    3353 qemu.go:418] Using hvf for hardware acceleration
	I0829 11:48:11.563900    3353 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19531-965/.minikube/machines/ha-692000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19531-965/.minikube/machines/ha-692000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19531-965/.minikube/machines/ha-692000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f2:5f:87:92:d7:5c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19531-965/.minikube/machines/ha-692000/disk.qcow2
	I0829 11:48:11.566086    3353 main.go:141] libmachine: STDOUT: 
	I0829 11:48:11.566106    3353 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0829 11:48:11.566134    3353 fix.go:56] duration metric: took 13.510666ms for fixHost
	I0829 11:48:11.566138    3353 start.go:83] releasing machines lock for "ha-692000", held for 13.524042ms
	W0829 11:48:11.566145    3353 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0829 11:48:11.566176    3353 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0829 11:48:11.566181    3353 start.go:729] Will try again in 5 seconds ...
	I0829 11:48:16.567450    3353 start.go:360] acquireMachinesLock for ha-692000: {Name:mk54c878341df5948b162b01843176e59d4f0973 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0829 11:48:16.567922    3353 start.go:364] duration metric: took 364.458µs to acquireMachinesLock for "ha-692000"
	I0829 11:48:16.568064    3353 start.go:96] Skipping create...Using existing machine configuration
	I0829 11:48:16.568081    3353 fix.go:54] fixHost starting: 
	I0829 11:48:16.568755    3353 fix.go:112] recreateIfNeeded on ha-692000: state=Stopped err=<nil>
	W0829 11:48:16.568778    3353 fix.go:138] unexpected machine state, will restart: <nil>
	I0829 11:48:16.571056    3353 out.go:177] * Restarting existing qemu2 VM for "ha-692000" ...
	I0829 11:48:16.579336    3353 qemu.go:418] Using hvf for hardware acceleration
	I0829 11:48:16.579534    3353 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19531-965/.minikube/machines/ha-692000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19531-965/.minikube/machines/ha-692000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19531-965/.minikube/machines/ha-692000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f2:5f:87:92:d7:5c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19531-965/.minikube/machines/ha-692000/disk.qcow2
	I0829 11:48:16.587474    3353 main.go:141] libmachine: STDOUT: 
	I0829 11:48:16.587538    3353 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0829 11:48:16.587630    3353 fix.go:56] duration metric: took 19.552125ms for fixHost
	I0829 11:48:16.587646    3353 start.go:83] releasing machines lock for "ha-692000", held for 19.702ms
	W0829 11:48:16.587813    3353 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-692000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-692000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0829 11:48:16.595210    3353 out.go:201] 
	W0829 11:48:16.599206    3353 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0829 11:48:16.599237    3353 out.go:270] * 
	* 
	W0829 11:48:16.600859    3353 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0829 11:48:16.608140    3353 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:562: failed to start cluster. args "out/minikube-darwin-arm64 start -p ha-692000 --wait=true -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-692000 -n ha-692000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-692000 -n ha-692000: exit status 7 (61.652125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-692000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartCluster (5.24s)
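Both start attempts above follow the same shape, visible at start.go:714 and start.go:729: on a host-start error, release the machines lock, wait five seconds, retry once, then exit with GUEST_PROVISION. A condensed sketch of that retry loop; the function name is illustrative, not minikube's, and the stub fails the same way the log does so both attempts are exercised.

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// startHost stands in for minikube's fixHost/driver-start path.
func startHost() error {
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func main() {
	const attempts = 2              // the log shows exactly one retry
	const backoff = 5 * time.Second // "Will try again in 5 seconds ..."

	var err error
	for i := 0; i < attempts; i++ {
		if err = startHost(); err == nil {
			fmt.Println("host started")
			return
		}
		if i < attempts-1 {
			fmt.Println("! StartHost failed, but will try again:", err)
			time.Sleep(backoff)
		}
	}
	fmt.Println("X Exiting due to GUEST_PROVISION:", err)
}
```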

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-692000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-692000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-692000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.0\",\"ClusterName\":\"ha-692000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.105.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.105.5\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.105.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.105.7\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.105.8\",\"Port\":0,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-692000 -n ha-692000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-692000 -n ha-692000: exit status 7 (29.239042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-692000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.08s)
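
Note: ha_test.go:413 asserts on the Status field of the profile JSON dumped above; with every node down after the failed restart, the profile reports "Stopped" where the test expects "Degraded". The same check is easy to reproduce outside the harness. A minimal sketch in Go, reading only the "valid"/"Name"/"Status" fields visible in the output above (binary path as used by this run):

    package main

    import (
        "encoding/json"
        "fmt"
        "log"
        "os/exec"
    )

    // profileList mirrors only the fields of `profile list --output json`
    // that the assertion above reads.
    type profileList struct {
        Valid []struct {
            Name   string `json:"Name"`
            Status string `json:"Status"`
        } `json:"valid"`
    }

    func main() {
        out, err := exec.Command("out/minikube-darwin-arm64", "profile", "list", "--output", "json").Output()
        if err != nil {
            log.Fatal(err)
        }
        var pl profileList
        if err := json.Unmarshal(out, &pl); err != nil {
            log.Fatal(err)
        }
        for _, p := range pl.Valid {
            fmt.Printf("%s: %s\n", p.Name, p.Status) // this run prints "ha-692000: Stopped"
        }
    }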

TestMultiControlPlane/serial/AddSecondaryNode (0.07s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-692000 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-692000 --control-plane -v=7 --alsologtostderr: exit status 83 (41.36175ms)

-- stdout --
	* The control-plane node ha-692000-m03 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-692000"

-- /stdout --
** stderr ** 
	I0829 11:48:16.788053    3369 out.go:345] Setting OutFile to fd 1 ...
	I0829 11:48:16.788212    3369 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 11:48:16.788215    3369 out.go:358] Setting ErrFile to fd 2...
	I0829 11:48:16.788217    3369 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 11:48:16.788336    3369 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19531-965/.minikube/bin
	I0829 11:48:16.788582    3369 mustload.go:65] Loading cluster: ha-692000
	I0829 11:48:16.788788    3369 config.go:182] Loaded profile config "ha-692000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	W0829 11:48:16.789096    3369 out.go:270] ! The control-plane node ha-692000 host is not running (will try others): state=Stopped
	! The control-plane node ha-692000 host is not running (will try others): state=Stopped
	W0829 11:48:16.789222    3369 out.go:270] ! The control-plane node ha-692000-m02 host is not running (will try others): state=Stopped
	! The control-plane node ha-692000-m02 host is not running (will try others): state=Stopped
	I0829 11:48:16.792414    3369 out.go:177] * The control-plane node ha-692000-m03 host is not running: state=Stopped
	I0829 11:48:16.796383    3369 out.go:177]   To start a cluster, run: "minikube start -p ha-692000"

** /stderr **
ha_test.go:607: failed to add control-plane node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-692000 --control-plane -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-692000 -n ha-692000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-692000 -n ha-692000: exit status 7 (30.047083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-692000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddSecondaryNode (0.07s)

TestImageBuild/serial/Setup (9.94s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-192000 --driver=qemu2 
image_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p image-192000 --driver=qemu2 : exit status 80 (9.867055s)

-- stdout --
	* [image-192000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19531
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19531-965/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19531-965/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "image-192000" primary control-plane node in "image-192000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "image-192000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p image-192000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
image_test.go:70: failed to start minikube with args: "out/minikube-darwin-arm64 start -p image-192000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p image-192000 -n image-192000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p image-192000 -n image-192000: exit status 7 (69.874167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "image-192000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestImageBuild/serial/Setup (9.94s)
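
Note: this failure, and every other GUEST_PROVISION failure in this run, reduces to one symptom: qemu is launched through socket_vmnet_client, and the dial of the unix socket /var/run/socket_vmnet is refused, meaning no socket_vmnet daemon was listening on the CI host. A quick connectivity probe, as a sketch in Go (the socket path is taken from the log above; how the daemon is supervised on this host, launchd or Homebrew services, is an assumption):

    package main

    import (
        "fmt"
        "net"
        "os"
        "time"
    )

    func main() {
        // Dial the same unix socket that socket_vmnet_client uses; a healthy
        // daemon accepts at once, a missing one yields "connection refused".
        conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
        if err != nil {
            fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
            os.Exit(1)
        }
        conn.Close()
        fmt.Println("socket_vmnet is accepting connections")
    }

If the probe fails, restarting the daemon before rerunning the suite (for a Homebrew install, typically "sudo brew services restart socket_vmnet") is the first thing to try.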

TestJSONOutput/start/Command (9.81s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-526000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
E0829 11:48:32.547299    1418 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/addons-048000/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-526000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : exit status 80 (9.81193925s)

-- stdout --
	{"specversion":"1.0","id":"be670617-6529-4922-9af6-8d1cb2957d83","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-526000] minikube v1.33.1 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"42485a51-3dea-4de4-b927-0a3e3021df47","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19531"}}
	{"specversion":"1.0","id":"a131a2a4-a31b-41a1-936b-8e9d1ad0c121","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19531-965/kubeconfig"}}
	{"specversion":"1.0","id":"418b0252-eab7-43a2-98d5-b90343096f3c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"63b89afe-8da2-42d2-8617-a99724b1383e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"b24bb176-4a14-44b0-a292-9e7bfb0293a2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19531-965/.minikube"}}
	{"specversion":"1.0","id":"2802d38f-e145-4d1d-b125-883dcdb20001","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"05f9496a-51cc-4f49-8c85-c161e78b03b2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"8fac5a67-9182-46d5-ac77-01624987de2a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"cc3066b4-2e5d-41a3-9059-9c0943e4e94b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"json-output-526000\" primary control-plane node in \"json-output-526000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"a465d0fb-0889-4b1c-8168-4603fc1fb206","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"441e3e4f-675d-4266-a0af-02e5d5acae50","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Deleting \"json-output-526000\" in qemu2 ...","name":"Creating VM","totalsteps":"19"}}
	{"specversion":"1.0","id":"5bbcce24-4e35-43bd-99e7-be1bd246743f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"b3438cfb-61e4-4b60-9e5a-cd615d1b8f89","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"2748a852-044f-433b-9ff5-2936181f24c5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"Failed to start qemu2 VM. Running \"minikube delete -p json-output-526000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"410e6506-b6d8-4837-ac26-38d94d69c225","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1","name":"GUEST_PROVISION","url":""}}
	{"specversion":"1.0","id":"04d25d5a-d43f-47ba-a689-18049aa1d3e9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 start -p json-output-526000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 ": exit status 80
json_output_test.go:213: unable to marshal output: OUTPUT: 
json_output_test.go:70: converting to cloud events: invalid character 'O' looking for beginning of value
--- FAIL: TestJSONOutput/start/Command (9.81s)
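
Note: json_output_test.go decodes every stdout line as a single JSON cloud event, so the bare "OUTPUT:" and "ERROR:" lines that the qemu launcher interleaves with the event stream are what produce "invalid character 'O' looking for beginning of value" at json_output_test.go:70. The per-line decode step is easy to reproduce; a minimal sketch in Go that flags the offending lines, reading from stdin rather than from the test's captured buffer:

    package main

    import (
        "bufio"
        "encoding/json"
        "fmt"
        "os"
    )

    func main() {
        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // event lines can be long
        for n := 1; sc.Scan(); n++ {
            line := sc.Text()
            if len(line) == 0 {
                continue
            }
            var ev map[string]any
            if err := json.Unmarshal([]byte(line), &ev); err != nil {
                fmt.Printf("line %d is not a cloud event: %v\n", n, err)
            }
        }
    }

Piping the stdout of the start command shown above into this program flags exactly the two OUTPUT:/ERROR: blocks.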

TestJSONOutput/pause/Command (0.08s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-526000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p json-output-526000 --output=json --user=testUser: exit status 83 (76.856917ms)

-- stdout --
	{"specversion":"1.0","id":"15c7223e-3321-486d-9b9f-5d683214f84a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"The control-plane node json-output-526000 host is not running: state=Stopped"}}
	{"specversion":"1.0","id":"3e1143f7-6d74-4744-aa06-7f5608c87e01","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"To start a cluster, run: \"minikube start -p json-output-526000\""}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 pause -p json-output-526000 --output=json --user=testUser": exit status 83
--- FAIL: TestJSONOutput/pause/Command (0.08s)

TestJSONOutput/unpause/Command (0.04s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-526000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 unpause -p json-output-526000 --output=json --user=testUser: exit status 83 (44.372083ms)

-- stdout --
	* The control-plane node json-output-526000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p json-output-526000"

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 unpause -p json-output-526000 --output=json --user=testUser": exit status 83
json_output_test.go:213: unable to marshal output: * The control-plane node json-output-526000 host is not running: state=Stopped
json_output_test.go:70: converting to cloud events: invalid character '*' looking for beginning of value
--- FAIL: TestJSONOutput/unpause/Command (0.04s)

TestMinikubeProfile (10.22s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-554000 --driver=qemu2 
minikube_profile_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p first-554000 --driver=qemu2 : exit status 80 (9.922582917s)

-- stdout --
	* [first-554000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19531
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19531-965/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19531-965/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "first-554000" primary control-plane node in "first-554000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "first-554000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p first-554000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
minikube_profile_test.go:46: test pre-condition failed. args "out/minikube-darwin-arm64 start -p first-554000 --driver=qemu2 ": exit status 80
panic.go:626: *** TestMinikubeProfile FAILED at 2024-08-29 11:48:51.026562 -0700 PDT m=+2655.970493751
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p second-556000 -n second-556000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p second-556000 -n second-556000: exit status 85 (88.072834ms)

-- stdout --
	* Profile "second-556000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p second-556000"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "second-556000" host is not running, skipping log retrieval (state="* Profile \"second-556000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p second-556000\"")
helpers_test.go:175: Cleaning up "second-556000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-556000
panic.go:626: *** TestMinikubeProfile FAILED at 2024-08-29 11:48:51.219526 -0700 PDT m=+2656.163460876
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p first-554000 -n first-554000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p first-554000 -n first-554000: exit status 7 (29.843292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "first-554000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "first-554000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-554000
--- FAIL: TestMinikubeProfile (10.22s)

TestMountStart/serial/StartWithMountFirst (10.03s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-636000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-636000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (9.960774625s)

-- stdout --
	* [mount-start-1-636000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19531
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19531-965/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19531-965/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-636000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-636000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-636000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-636000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-636000 -n mount-start-1-636000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-636000 -n mount-start-1-636000: exit status 7 (67.533042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-636000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (10.03s)

TestMultiNode/serial/FreshStart2Nodes (10.18s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-531000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-531000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (10.106186458s)

-- stdout --
	* [multinode-531000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19531
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19531-965/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19531-965/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-531000" primary control-plane node in "multinode-531000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-531000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0829 11:49:01.562285    3522 out.go:345] Setting OutFile to fd 1 ...
	I0829 11:49:01.562412    3522 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 11:49:01.562416    3522 out.go:358] Setting ErrFile to fd 2...
	I0829 11:49:01.562418    3522 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 11:49:01.562538    3522 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19531-965/.minikube/bin
	I0829 11:49:01.563628    3522 out.go:352] Setting JSON to false
	I0829 11:49:01.579690    3522 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":2905,"bootTime":1724954436,"procs":466,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0829 11:49:01.579762    3522 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0829 11:49:01.586888    3522 out.go:177] * [multinode-531000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0829 11:49:01.595722    3522 out.go:177]   - MINIKUBE_LOCATION=19531
	I0829 11:49:01.595765    3522 notify.go:220] Checking for updates...
	I0829 11:49:01.600738    3522 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19531-965/kubeconfig
	I0829 11:49:01.603701    3522 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0829 11:49:01.606712    3522 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0829 11:49:01.609681    3522 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19531-965/.minikube
	I0829 11:49:01.612653    3522 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0829 11:49:01.615932    3522 driver.go:392] Setting default libvirt URI to qemu:///system
	I0829 11:49:01.620665    3522 out.go:177] * Using the qemu2 driver based on user configuration
	I0829 11:49:01.627722    3522 start.go:297] selected driver: qemu2
	I0829 11:49:01.627730    3522 start.go:901] validating driver "qemu2" against <nil>
	I0829 11:49:01.627749    3522 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0829 11:49:01.630091    3522 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0829 11:49:01.632666    3522 out.go:177] * Automatically selected the socket_vmnet network
	I0829 11:49:01.635760    3522 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0829 11:49:01.635780    3522 cni.go:84] Creating CNI manager for ""
	I0829 11:49:01.635786    3522 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0829 11:49:01.635791    3522 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0829 11:49:01.635831    3522 start.go:340] cluster config:
	{Name:multinode-531000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-531000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 11:49:01.639620    3522 iso.go:125] acquiring lock: {Name:mke1867919ff797f263eb38fb348ce00bba9d753 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 11:49:01.645672    3522 out.go:177] * Starting "multinode-531000" primary control-plane node in "multinode-531000" cluster
	I0829 11:49:01.649685    3522 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0829 11:49:01.649698    3522 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19531-965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0829 11:49:01.649710    3522 cache.go:56] Caching tarball of preloaded images
	I0829 11:49:01.649770    3522 preload.go:172] Found /Users/jenkins/minikube-integration/19531-965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0829 11:49:01.649776    3522 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0829 11:49:01.649975    3522 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/multinode-531000/config.json ...
	I0829 11:49:01.649986    3522 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/multinode-531000/config.json: {Name:mkc1529d415aae4c4886265867b2de5e881e8d22 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 11:49:01.650238    3522 start.go:360] acquireMachinesLock for multinode-531000: {Name:mk54c878341df5948b162b01843176e59d4f0973 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0829 11:49:01.650274    3522 start.go:364] duration metric: took 30µs to acquireMachinesLock for "multinode-531000"
	I0829 11:49:01.650288    3522 start.go:93] Provisioning new machine with config: &{Name:multinode-531000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-531000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0829 11:49:01.650319    3522 start.go:125] createHost starting for "" (driver="qemu2")
	I0829 11:49:01.658658    3522 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0829 11:49:01.676816    3522 start.go:159] libmachine.API.Create for "multinode-531000" (driver="qemu2")
	I0829 11:49:01.676844    3522 client.go:168] LocalClient.Create starting
	I0829 11:49:01.676912    3522 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19531-965/.minikube/certs/ca.pem
	I0829 11:49:01.676941    3522 main.go:141] libmachine: Decoding PEM data...
	I0829 11:49:01.676950    3522 main.go:141] libmachine: Parsing certificate...
	I0829 11:49:01.676985    3522 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19531-965/.minikube/certs/cert.pem
	I0829 11:49:01.677007    3522 main.go:141] libmachine: Decoding PEM data...
	I0829 11:49:01.677016    3522 main.go:141] libmachine: Parsing certificate...
	I0829 11:49:01.677374    3522 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19531-965/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19531-965/.minikube/cache/iso/arm64/minikube-v1.33.1-1724775098-19521-arm64.iso...
	I0829 11:49:01.836587    3522 main.go:141] libmachine: Creating SSH key...
	I0829 11:49:02.111928    3522 main.go:141] libmachine: Creating Disk image...
	I0829 11:49:02.111936    3522 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0829 11:49:02.112192    3522 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19531-965/.minikube/machines/multinode-531000/disk.qcow2.raw /Users/jenkins/minikube-integration/19531-965/.minikube/machines/multinode-531000/disk.qcow2
	I0829 11:49:02.122710    3522 main.go:141] libmachine: STDOUT: 
	I0829 11:49:02.122730    3522 main.go:141] libmachine: STDERR: 
	I0829 11:49:02.122800    3522 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19531-965/.minikube/machines/multinode-531000/disk.qcow2 +20000M
	I0829 11:49:02.131052    3522 main.go:141] libmachine: STDOUT: Image resized.
	
	I0829 11:49:02.131066    3522 main.go:141] libmachine: STDERR: 
	I0829 11:49:02.131077    3522 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19531-965/.minikube/machines/multinode-531000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19531-965/.minikube/machines/multinode-531000/disk.qcow2
	I0829 11:49:02.131083    3522 main.go:141] libmachine: Starting QEMU VM...
	I0829 11:49:02.131096    3522 qemu.go:418] Using hvf for hardware acceleration
	I0829 11:49:02.131136    3522 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19531-965/.minikube/machines/multinode-531000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19531-965/.minikube/machines/multinode-531000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19531-965/.minikube/machines/multinode-531000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:87:ad:da:85:d8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19531-965/.minikube/machines/multinode-531000/disk.qcow2
	I0829 11:49:02.132813    3522 main.go:141] libmachine: STDOUT: 
	I0829 11:49:02.132827    3522 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0829 11:49:02.132844    3522 client.go:171] duration metric: took 456.001584ms to LocalClient.Create
	I0829 11:49:04.134990    3522 start.go:128] duration metric: took 2.484685458s to createHost
	I0829 11:49:04.135051    3522 start.go:83] releasing machines lock for "multinode-531000", held for 2.484802792s
	W0829 11:49:04.135106    3522 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0829 11:49:04.145223    3522 out.go:177] * Deleting "multinode-531000" in qemu2 ...
	W0829 11:49:04.186090    3522 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0829 11:49:04.186114    3522 start.go:729] Will try again in 5 seconds ...
	I0829 11:49:09.188228    3522 start.go:360] acquireMachinesLock for multinode-531000: {Name:mk54c878341df5948b162b01843176e59d4f0973 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0829 11:49:09.188724    3522 start.go:364] duration metric: took 373.459µs to acquireMachinesLock for "multinode-531000"
	I0829 11:49:09.188851    3522 start.go:93] Provisioning new machine with config: &{Name:multinode-531000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-531000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0829 11:49:09.189119    3522 start.go:125] createHost starting for "" (driver="qemu2")
	I0829 11:49:09.209627    3522 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0829 11:49:09.263039    3522 start.go:159] libmachine.API.Create for "multinode-531000" (driver="qemu2")
	I0829 11:49:09.263086    3522 client.go:168] LocalClient.Create starting
	I0829 11:49:09.263211    3522 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19531-965/.minikube/certs/ca.pem
	I0829 11:49:09.263264    3522 main.go:141] libmachine: Decoding PEM data...
	I0829 11:49:09.263288    3522 main.go:141] libmachine: Parsing certificate...
	I0829 11:49:09.263360    3522 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19531-965/.minikube/certs/cert.pem
	I0829 11:49:09.263403    3522 main.go:141] libmachine: Decoding PEM data...
	I0829 11:49:09.263414    3522 main.go:141] libmachine: Parsing certificate...
	I0829 11:49:09.263942    3522 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19531-965/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19531-965/.minikube/cache/iso/arm64/minikube-v1.33.1-1724775098-19521-arm64.iso...
	I0829 11:49:09.431962    3522 main.go:141] libmachine: Creating SSH key...
	I0829 11:49:09.571130    3522 main.go:141] libmachine: Creating Disk image...
	I0829 11:49:09.571140    3522 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0829 11:49:09.571314    3522 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19531-965/.minikube/machines/multinode-531000/disk.qcow2.raw /Users/jenkins/minikube-integration/19531-965/.minikube/machines/multinode-531000/disk.qcow2
	I0829 11:49:09.580670    3522 main.go:141] libmachine: STDOUT: 
	I0829 11:49:09.580697    3522 main.go:141] libmachine: STDERR: 
	I0829 11:49:09.580751    3522 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19531-965/.minikube/machines/multinode-531000/disk.qcow2 +20000M
	I0829 11:49:09.588928    3522 main.go:141] libmachine: STDOUT: Image resized.
	
	I0829 11:49:09.588944    3522 main.go:141] libmachine: STDERR: 
	I0829 11:49:09.588960    3522 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19531-965/.minikube/machines/multinode-531000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19531-965/.minikube/machines/multinode-531000/disk.qcow2
	I0829 11:49:09.588965    3522 main.go:141] libmachine: Starting QEMU VM...
	I0829 11:49:09.588976    3522 qemu.go:418] Using hvf for hardware acceleration
	I0829 11:49:09.589010    3522 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19531-965/.minikube/machines/multinode-531000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19531-965/.minikube/machines/multinode-531000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19531-965/.minikube/machines/multinode-531000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:c1:13:22:1b:11 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19531-965/.minikube/machines/multinode-531000/disk.qcow2
	I0829 11:49:09.590639    3522 main.go:141] libmachine: STDOUT: 
	I0829 11:49:09.590655    3522 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0829 11:49:09.590668    3522 client.go:171] duration metric: took 327.580917ms to LocalClient.Create
	I0829 11:49:11.592822    3522 start.go:128] duration metric: took 2.40368975s to createHost
	I0829 11:49:11.592873    3522 start.go:83] releasing machines lock for "multinode-531000", held for 2.4041575s
	W0829 11:49:11.593218    3522 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-531000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-531000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0829 11:49:11.609670    3522 out.go:201] 
	W0829 11:49:11.614793    3522 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0829 11:49:11.614817    3522 out.go:270] * 
	* 
	W0829 11:49:11.617540    3522 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0829 11:49:11.626503    3522 out.go:201] 

** /stderr **
multinode_test.go:98: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-531000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-531000 -n multinode-531000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-531000 -n multinode-531000: exit status 7 (66.271416ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-531000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (10.18s)
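
The start failure above reduces to a single root cause: the qemu2 driver's socket_vmnet backend refused the connection on /var/run/socket_vmnet, so the VM never booted and every later subtest in this group inherited a stopped host. The standalone Go sketch below is illustrative only, not part of the harness; the socket path is taken from the error text. It separates the two likely states: the socket file is missing entirely, or it exists but no socket_vmnet daemon is listening behind it, which is what "Connection refused" indicates.

	// socket_vmnet_probe.go — diagnostic sketch, not harness code.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		const sock = "/var/run/socket_vmnet" // path taken from the error above

		if _, err := os.Stat(sock); err != nil {
			fmt.Printf("socket file problem: %v (is socket_vmnet installed?)\n", err)
			return
		}
		// The file exists: a refused dial means no daemon is accepting on it,
		// matching the failure mode in this report.
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			fmt.Printf("dial failed: %v (socket_vmnet daemon likely not running)\n", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}
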

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (81.54s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-531000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-531000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (127.539916ms)

                                                
                                                
** stderr ** 
	error: cluster "multinode-531000" does not exist

                                                
                                                
** /stderr **
multinode_test.go:495: failed to create busybox deployment to multinode cluster
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-531000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-531000 -- rollout status deployment/busybox: exit status 1 (58.162417ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-531000"

                                                
                                                
** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-531000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-531000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (58.248125ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-531000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-531000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-531000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.967167ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-531000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-531000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-531000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.808125ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-531000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-531000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-531000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.096625ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-531000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-531000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-531000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.078542ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-531000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-531000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-531000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.176542ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-531000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-531000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-531000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.019209ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-531000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-531000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-531000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.92675ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-531000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-531000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-531000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.673458ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-531000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-531000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-531000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.968666ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-531000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:524: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-531000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-531000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (57.706958ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-531000"

                                                
                                                
** /stderr **
multinode_test.go:530: failed get Pod names
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-531000 -- exec  -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-531000 -- exec  -- nslookup kubernetes.io: exit status 1 (55.633083ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-531000"

                                                
                                                
** /stderr **
multinode_test.go:538: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-531000 -- exec  -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-531000 -- exec  -- nslookup kubernetes.default: exit status 1 (56.843375ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-531000"

                                                
                                                
** /stderr **
multinode_test.go:548: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-531000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-531000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (57.045708ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-531000"

                                                
                                                
** /stderr **
multinode_test.go:556: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-531000 -n multinode-531000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-531000 -n multinode-531000: exit status 7 (29.168875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-531000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (81.54s)
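
The repetition above is the harness's poll loop at multinode_test.go:505/508: the same kubectl query is re-run until pod IPs appear or the retry budget is exhausted, at which point the test fails at line 524. Below is a minimal sketch of that retry shape; the interval and deadline values are assumed for illustration (the real helper and its timings are not shown in this log).

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// pollUntil re-runs a command until it succeeds or the deadline passes.
	func pollUntil(deadline, interval time.Duration, name string, args ...string) ([]byte, error) {
		stop := time.Now().Add(deadline)
		for {
			out, err := exec.Command(name, args...).CombinedOutput()
			if err == nil {
				return out, nil
			}
			if time.Now().After(stop) {
				return out, fmt.Errorf("gave up after %s: %w", deadline, err)
			}
			time.Sleep(interval)
		}
	}

	func main() {
		out, err := pollUntil(80*time.Second, 8*time.Second,
			"kubectl", "--context", "multinode-531000",
			"get", "pods", "-o", "jsonpath={.items[*].status.podIP}")
		fmt.Println(string(out), err)
	}
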

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-531000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:564: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-531000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (55.969875ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-531000"

                                                
                                                
** /stderr **
multinode_test.go:566: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-531000 -n multinode-531000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-531000 -n multinode-531000: exit status 7 (29.041875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-531000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.09s)

                                                
                                    
TestMultiNode/serial/AddNode (0.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-531000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-531000 -v 3 --alsologtostderr: exit status 83 (44.111208ms)

                                                
                                                
-- stdout --
	* The control-plane node multinode-531000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-531000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0829 11:50:33.365766    3612 out.go:345] Setting OutFile to fd 1 ...
	I0829 11:50:33.365920    3612 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 11:50:33.365923    3612 out.go:358] Setting ErrFile to fd 2...
	I0829 11:50:33.365926    3612 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 11:50:33.366055    3612 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19531-965/.minikube/bin
	I0829 11:50:33.366323    3612 mustload.go:65] Loading cluster: multinode-531000
	I0829 11:50:33.366504    3612 config.go:182] Loaded profile config "multinode-531000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0829 11:50:33.372501    3612 out.go:177] * The control-plane node multinode-531000 host is not running: state=Stopped
	I0829 11:50:33.376662    3612 out.go:177]   To start a cluster, run: "minikube start -p multinode-531000"

                                                
                                                
** /stderr **
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-531000 -v 3 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-531000 -n multinode-531000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-531000 -n multinode-531000: exit status 7 (29.344291ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-531000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.07s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-531000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:221: (dbg) Non-zero exit: kubectl --context multinode-531000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (28.509459ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: multinode-531000

                                                
                                                
** /stderr **
multinode_test.go:223: failed to 'kubectl get nodes' with args "kubectl --context multinode-531000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:230: failed to decode json from label list: args "kubectl --context multinode-531000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-531000 -n multinode-531000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-531000 -n multinode-531000: exit status 7 (29.7895ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-531000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.06s)
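
The second error above ("unexpected end of JSON input") follows mechanically from the first: since kubectl exited non-zero, its stdout was empty, and decoding zero bytes of JSON always yields exactly that message. A self-contained illustration:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	func main() {
		var labels []map[string]string
		// Empty input stands in for the empty stdout of the failed kubectl call.
		err := json.Unmarshal([]byte(""), &labels)
		fmt.Println(err) // unexpected end of JSON input
	}
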

                                                
                                    
TestMultiNode/serial/ProfileList (0.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:166: expected profile "multinode-531000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-531000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"multinode-531000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.0\",\"ClusterName\":\"multinode-531000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-531000 -n multinode-531000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-531000 -n multinode-531000: exit status 7 (28.95125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-531000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.07s)
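
The assertion above decodes the `profile list --output json` payload and counts the entries under Config.Nodes; it wanted 3 (presumably the two nodes requested at start plus the one the AddNode subtest tried to add) but found only the single placeholder control-plane node, since the VM was never created. A trimmed sketch of that check follows; the struct fields are reduced to the keys visible in the log and are not the test's actual types.

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// profileList mirrors just the keys this check needs from the log's JSON.
	type profileList struct {
		Valid []struct {
			Name   string
			Config struct {
				Nodes []struct {
					Name         string
					ControlPlane bool
					Worker       bool
				}
			}
		}
	}

	func main() {
		// Abbreviated payload with the same shape as the log's output.
		payload := []byte(`{"invalid":[],"valid":[{"Name":"multinode-531000",
			"Config":{"Nodes":[{"Name":"","ControlPlane":true,"Worker":true}]}}]}`)
		var pl profileList
		if err := json.Unmarshal(payload, &pl); err != nil {
			panic(err)
		}
		for _, p := range pl.Valid {
			fmt.Printf("%s: %d node(s) in config, want 3\n", p.Name, len(p.Config.Nodes))
		}
	}
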

                                                
                                    
TestMultiNode/serial/CopyFile (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-531000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-531000 status --output json --alsologtostderr: exit status 7 (30.261417ms)

                                                
                                                
-- stdout --
	{"Name":"multinode-531000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

                                                
                                                
-- /stdout --
** stderr ** 
	I0829 11:50:33.573080    3624 out.go:345] Setting OutFile to fd 1 ...
	I0829 11:50:33.573247    3624 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 11:50:33.573250    3624 out.go:358] Setting ErrFile to fd 2...
	I0829 11:50:33.573252    3624 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 11:50:33.573388    3624 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19531-965/.minikube/bin
	I0829 11:50:33.573526    3624 out.go:352] Setting JSON to true
	I0829 11:50:33.573536    3624 mustload.go:65] Loading cluster: multinode-531000
	I0829 11:50:33.573603    3624 notify.go:220] Checking for updates...
	I0829 11:50:33.573728    3624 config.go:182] Loaded profile config "multinode-531000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0829 11:50:33.573735    3624 status.go:255] checking status of multinode-531000 ...
	I0829 11:50:33.573941    3624 status.go:330] multinode-531000 host status = "Stopped" (err=<nil>)
	I0829 11:50:33.573945    3624 status.go:343] host is not running, skipping remaining checks
	I0829 11:50:33.573947    3624 status.go:257] multinode-531000 status: &{Name:multinode-531000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:191: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-531000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-531000 -n multinode-531000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-531000 -n multinode-531000: exit status 7 (29.186416ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-531000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.06s)
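
The decode error above is a shape mismatch: with only one node present, `status --output json` emitted a single JSON object (see the stdout above), while the test unmarshals into a slice ([]cmd.Status). The snippet below reproduces the error class with a stand-in Status type; it is an illustration, not minikube's code.

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// Status is a stand-in for minikube's cmd.Status; fields match the stdout above.
	type Status struct {
		Name, Host, Kubelet, APIServer, Kubeconfig string
		Worker                                     bool
	}

	func main() {
		payload := []byte(`{"Name":"multinode-531000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}`)
		var statuses []Status
		// Decoding a lone object into a slice fails the same way the test did.
		err := json.Unmarshal(payload, &statuses)
		fmt.Println(err) // json: cannot unmarshal object into Go value of type []main.Status
	}
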

                                                
                                    
TestMultiNode/serial/StopNode (0.14s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-531000 node stop m03
multinode_test.go:248: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-531000 node stop m03: exit status 85 (46.881958ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:250: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-531000 node stop m03": exit status 85
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-531000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-531000 status: exit status 7 (30.203917ms)

                                                
                                                
-- stdout --
	multinode-531000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-531000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-531000 status --alsologtostderr: exit status 7 (29.485416ms)

                                                
                                                
-- stdout --
	multinode-531000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0829 11:50:33.709714    3632 out.go:345] Setting OutFile to fd 1 ...
	I0829 11:50:33.709862    3632 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 11:50:33.709869    3632 out.go:358] Setting ErrFile to fd 2...
	I0829 11:50:33.709871    3632 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 11:50:33.710012    3632 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19531-965/.minikube/bin
	I0829 11:50:33.710139    3632 out.go:352] Setting JSON to false
	I0829 11:50:33.710152    3632 mustload.go:65] Loading cluster: multinode-531000
	I0829 11:50:33.710212    3632 notify.go:220] Checking for updates...
	I0829 11:50:33.710342    3632 config.go:182] Loaded profile config "multinode-531000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0829 11:50:33.710349    3632 status.go:255] checking status of multinode-531000 ...
	I0829 11:50:33.710558    3632 status.go:330] multinode-531000 host status = "Stopped" (err=<nil>)
	I0829 11:50:33.710561    3632 status.go:343] host is not running, skipping remaining checks
	I0829 11:50:33.710564    3632 status.go:257] multinode-531000 status: &{Name:multinode-531000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:267: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-531000 status --alsologtostderr": multinode-531000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-531000 -n multinode-531000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-531000 -n multinode-531000: exit status 7 (29.197542ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-531000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.14s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (40.39s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-531000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-531000 node start m03 -v=7 --alsologtostderr: exit status 85 (46.527625ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0829 11:50:33.769340    3636 out.go:345] Setting OutFile to fd 1 ...
	I0829 11:50:33.769581    3636 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 11:50:33.769585    3636 out.go:358] Setting ErrFile to fd 2...
	I0829 11:50:33.769587    3636 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 11:50:33.769727    3636 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19531-965/.minikube/bin
	I0829 11:50:33.769969    3636 mustload.go:65] Loading cluster: multinode-531000
	I0829 11:50:33.770175    3636 config.go:182] Loaded profile config "multinode-531000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0829 11:50:33.774655    3636 out.go:201] 
	W0829 11:50:33.777634    3636 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W0829 11:50:33.777639    3636 out.go:270] * 
	* 
	W0829 11:50:33.779267    3636 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0829 11:50:33.782584    3636 out.go:201] 

                                                
                                                
** /stderr **
multinode_test.go:284: I0829 11:50:33.769340    3636 out.go:345] Setting OutFile to fd 1 ...
I0829 11:50:33.769581    3636 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0829 11:50:33.769585    3636 out.go:358] Setting ErrFile to fd 2...
I0829 11:50:33.769587    3636 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0829 11:50:33.769727    3636 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19531-965/.minikube/bin
I0829 11:50:33.769969    3636 mustload.go:65] Loading cluster: multinode-531000
I0829 11:50:33.770175    3636 config.go:182] Loaded profile config "multinode-531000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0829 11:50:33.774655    3636 out.go:201] 
W0829 11:50:33.777634    3636 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W0829 11:50:33.777639    3636 out.go:270] * 
* 
W0829 11:50:33.779267    3636 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0829 11:50:33.782584    3636 out.go:201] 

                                                
                                                
multinode_test.go:285: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-531000 node start m03 -v=7 --alsologtostderr": exit status 85
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-531000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-531000 status -v=7 --alsologtostderr: exit status 7 (29.039291ms)

                                                
                                                
-- stdout --
	multinode-531000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0829 11:50:33.814945    3638 out.go:345] Setting OutFile to fd 1 ...
	I0829 11:50:33.815098    3638 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 11:50:33.815102    3638 out.go:358] Setting ErrFile to fd 2...
	I0829 11:50:33.815104    3638 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 11:50:33.815239    3638 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19531-965/.minikube/bin
	I0829 11:50:33.815368    3638 out.go:352] Setting JSON to false
	I0829 11:50:33.815378    3638 mustload.go:65] Loading cluster: multinode-531000
	I0829 11:50:33.815435    3638 notify.go:220] Checking for updates...
	I0829 11:50:33.815581    3638 config.go:182] Loaded profile config "multinode-531000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0829 11:50:33.815592    3638 status.go:255] checking status of multinode-531000 ...
	I0829 11:50:33.815815    3638 status.go:330] multinode-531000 host status = "Stopped" (err=<nil>)
	I0829 11:50:33.815818    3638 status.go:343] host is not running, skipping remaining checks
	I0829 11:50:33.815821    3638 status.go:257] multinode-531000 status: &{Name:multinode-531000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-531000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-531000 status -v=7 --alsologtostderr: exit status 7 (74.037916ms)

                                                
                                                
-- stdout --
	multinode-531000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0829 11:50:35.095128    3640 out.go:345] Setting OutFile to fd 1 ...
	I0829 11:50:35.095345    3640 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 11:50:35.095350    3640 out.go:358] Setting ErrFile to fd 2...
	I0829 11:50:35.095354    3640 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 11:50:35.095531    3640 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19531-965/.minikube/bin
	I0829 11:50:35.095713    3640 out.go:352] Setting JSON to false
	I0829 11:50:35.095728    3640 mustload.go:65] Loading cluster: multinode-531000
	I0829 11:50:35.095775    3640 notify.go:220] Checking for updates...
	I0829 11:50:35.096035    3640 config.go:182] Loaded profile config "multinode-531000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0829 11:50:35.096045    3640 status.go:255] checking status of multinode-531000 ...
	I0829 11:50:35.096349    3640 status.go:330] multinode-531000 host status = "Stopped" (err=<nil>)
	I0829 11:50:35.096355    3640 status.go:343] host is not running, skipping remaining checks
	I0829 11:50:35.096358    3640 status.go:257] multinode-531000 status: &{Name:multinode-531000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-531000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-531000 status -v=7 --alsologtostderr: exit status 7 (73.003875ms)

                                                
                                                
-- stdout --
	multinode-531000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0829 11:50:37.044825    3642 out.go:345] Setting OutFile to fd 1 ...
	I0829 11:50:37.045037    3642 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 11:50:37.045041    3642 out.go:358] Setting ErrFile to fd 2...
	I0829 11:50:37.045045    3642 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 11:50:37.045225    3642 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19531-965/.minikube/bin
	I0829 11:50:37.045385    3642 out.go:352] Setting JSON to false
	I0829 11:50:37.045397    3642 mustload.go:65] Loading cluster: multinode-531000
	I0829 11:50:37.045443    3642 notify.go:220] Checking for updates...
	I0829 11:50:37.045651    3642 config.go:182] Loaded profile config "multinode-531000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0829 11:50:37.045660    3642 status.go:255] checking status of multinode-531000 ...
	I0829 11:50:37.045950    3642 status.go:330] multinode-531000 host status = "Stopped" (err=<nil>)
	I0829 11:50:37.045954    3642 status.go:343] host is not running, skipping remaining checks
	I0829 11:50:37.045957    3642 status.go:257] multinode-531000 status: &{Name:multinode-531000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-531000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-531000 status -v=7 --alsologtostderr: exit status 7 (71.159459ms)

                                                
                                                
-- stdout --
	multinode-531000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0829 11:50:38.284118    3644 out.go:345] Setting OutFile to fd 1 ...
	I0829 11:50:38.284337    3644 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 11:50:38.284342    3644 out.go:358] Setting ErrFile to fd 2...
	I0829 11:50:38.284345    3644 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 11:50:38.284513    3644 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19531-965/.minikube/bin
	I0829 11:50:38.284674    3644 out.go:352] Setting JSON to false
	I0829 11:50:38.284688    3644 mustload.go:65] Loading cluster: multinode-531000
	I0829 11:50:38.284732    3644 notify.go:220] Checking for updates...
	I0829 11:50:38.284932    3644 config.go:182] Loaded profile config "multinode-531000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0829 11:50:38.284941    3644 status.go:255] checking status of multinode-531000 ...
	I0829 11:50:38.285231    3644 status.go:330] multinode-531000 host status = "Stopped" (err=<nil>)
	I0829 11:50:38.285235    3644 status.go:343] host is not running, skipping remaining checks
	I0829 11:50:38.285239    3644 status.go:257] multinode-531000 status: &{Name:multinode-531000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-531000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-531000 status -v=7 --alsologtostderr: exit status 7 (73.280583ms)

                                                
                                                
-- stdout --
	multinode-531000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0829 11:50:41.672944    3646 out.go:345] Setting OutFile to fd 1 ...
	I0829 11:50:41.673154    3646 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 11:50:41.673159    3646 out.go:358] Setting ErrFile to fd 2...
	I0829 11:50:41.673162    3646 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 11:50:41.673364    3646 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19531-965/.minikube/bin
	I0829 11:50:41.673550    3646 out.go:352] Setting JSON to false
	I0829 11:50:41.673565    3646 mustload.go:65] Loading cluster: multinode-531000
	I0829 11:50:41.673617    3646 notify.go:220] Checking for updates...
	I0829 11:50:41.673853    3646 config.go:182] Loaded profile config "multinode-531000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0829 11:50:41.673862    3646 status.go:255] checking status of multinode-531000 ...
	I0829 11:50:41.674134    3646 status.go:330] multinode-531000 host status = "Stopped" (err=<nil>)
	I0829 11:50:41.674139    3646 status.go:343] host is not running, skipping remaining checks
	I0829 11:50:41.674151    3646 status.go:257] multinode-531000 status: &{Name:multinode-531000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-531000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-531000 status -v=7 --alsologtostderr: exit status 7 (74.166375ms)

                                                
                                                
-- stdout --
	multinode-531000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0829 11:50:47.985705    3651 out.go:345] Setting OutFile to fd 1 ...
	I0829 11:50:47.985931    3651 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 11:50:47.985936    3651 out.go:358] Setting ErrFile to fd 2...
	I0829 11:50:47.985939    3651 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 11:50:47.986122    3651 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19531-965/.minikube/bin
	I0829 11:50:47.986288    3651 out.go:352] Setting JSON to false
	I0829 11:50:47.986301    3651 mustload.go:65] Loading cluster: multinode-531000
	I0829 11:50:47.986341    3651 notify.go:220] Checking for updates...
	I0829 11:50:47.986571    3651 config.go:182] Loaded profile config "multinode-531000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0829 11:50:47.986579    3651 status.go:255] checking status of multinode-531000 ...
	I0829 11:50:47.986877    3651 status.go:330] multinode-531000 host status = "Stopped" (err=<nil>)
	I0829 11:50:47.986882    3651 status.go:343] host is not running, skipping remaining checks
	I0829 11:50:47.986886    3651 status.go:257] multinode-531000 status: &{Name:multinode-531000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-531000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-531000 status -v=7 --alsologtostderr: exit status 7 (70.510041ms)

                                                
                                                
-- stdout --
	multinode-531000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0829 11:50:58.405183    3653 out.go:345] Setting OutFile to fd 1 ...
	I0829 11:50:58.405404    3653 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 11:50:58.405409    3653 out.go:358] Setting ErrFile to fd 2...
	I0829 11:50:58.405412    3653 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 11:50:58.405581    3653 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19531-965/.minikube/bin
	I0829 11:50:58.405727    3653 out.go:352] Setting JSON to false
	I0829 11:50:58.405741    3653 mustload.go:65] Loading cluster: multinode-531000
	I0829 11:50:58.405779    3653 notify.go:220] Checking for updates...
	I0829 11:50:58.406016    3653 config.go:182] Loaded profile config "multinode-531000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0829 11:50:58.406027    3653 status.go:255] checking status of multinode-531000 ...
	I0829 11:50:58.406290    3653 status.go:330] multinode-531000 host status = "Stopped" (err=<nil>)
	I0829 11:50:58.406295    3653 status.go:343] host is not running, skipping remaining checks
	I0829 11:50:58.406298    3653 status.go:257] multinode-531000 status: &{Name:multinode-531000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-531000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-531000 status -v=7 --alsologtostderr: exit status 7 (72.603916ms)

                                                
                                                
-- stdout --
	multinode-531000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0829 11:51:14.095391    3659 out.go:345] Setting OutFile to fd 1 ...
	I0829 11:51:14.095614    3659 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 11:51:14.095619    3659 out.go:358] Setting ErrFile to fd 2...
	I0829 11:51:14.095622    3659 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 11:51:14.095791    3659 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19531-965/.minikube/bin
	I0829 11:51:14.095948    3659 out.go:352] Setting JSON to false
	I0829 11:51:14.095961    3659 mustload.go:65] Loading cluster: multinode-531000
	I0829 11:51:14.095997    3659 notify.go:220] Checking for updates...
	I0829 11:51:14.096221    3659 config.go:182] Loaded profile config "multinode-531000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0829 11:51:14.096231    3659 status.go:255] checking status of multinode-531000 ...
	I0829 11:51:14.096496    3659 status.go:330] multinode-531000 host status = "Stopped" (err=<nil>)
	I0829 11:51:14.096501    3659 status.go:343] host is not running, skipping remaining checks
	I0829 11:51:14.096504    3659 status.go:257] multinode-531000 status: &{Name:multinode-531000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:294: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-531000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-531000 -n multinode-531000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-531000 -n multinode-531000: exit status 7 (32.108792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-531000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (40.39s)
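
Note: the exit status 7 seen throughout this group is minikube's status code for a stopped host, and the post-mortem helpers read single fields of the status struct logged at status.go:257 (Host, Kubelet, APIServer, Kubeconfig) via the --format Go template. A minimal sketch of the same probe in Go, assuming only a minikube binary on PATH (this report actually uses out/minikube-darwin-arm64):

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func main() {
        // Same probe the post-mortem helpers run, reduced to the Host field.
        cmd := exec.Command("minikube", "status", "--format={{.Host}}", "-p", "multinode-531000")
        out, err := cmd.Output() // stdout is returned even on a non-zero exit

        var exitErr *exec.ExitError
        if errors.As(err, &exitErr) {
            // A stopped host exits 7, which helpers_test.go flags as "(may be ok)".
            fmt.Printf("host state %q, exit code %d\n", out, exitErr.ExitCode())
            return
        }
        if err != nil {
            panic(err) // e.g. binary not on PATH
        }
        fmt.Printf("host state %q\n", out)
    }

Run against the stopped profile above, this would print the state "Stopped" with exit code 7.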

TestMultiNode/serial/RestartKeepsNodes (7.45s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-531000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-531000
multinode_test.go:321: (dbg) Done: out/minikube-darwin-arm64 stop -p multinode-531000: (2.103105334s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-531000 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-531000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.213689667s)

-- stdout --
	* [multinode-531000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19531
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19531-965/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19531-965/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-531000" primary control-plane node in "multinode-531000" cluster
	* Restarting existing qemu2 VM for "multinode-531000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-531000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0829 11:51:16.322569    3679 out.go:345] Setting OutFile to fd 1 ...
	I0829 11:51:16.322739    3679 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 11:51:16.322744    3679 out.go:358] Setting ErrFile to fd 2...
	I0829 11:51:16.322747    3679 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 11:51:16.322910    3679 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19531-965/.minikube/bin
	I0829 11:51:16.324191    3679 out.go:352] Setting JSON to false
	I0829 11:51:16.343872    3679 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3040,"bootTime":1724954436,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0829 11:51:16.343950    3679 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0829 11:51:16.348268    3679 out.go:177] * [multinode-531000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0829 11:51:16.354151    3679 out.go:177]   - MINIKUBE_LOCATION=19531
	I0829 11:51:16.354215    3679 notify.go:220] Checking for updates...
	I0829 11:51:16.361058    3679 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19531-965/kubeconfig
	I0829 11:51:16.364076    3679 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0829 11:51:16.367117    3679 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0829 11:51:16.368480    3679 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19531-965/.minikube
	I0829 11:51:16.371095    3679 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0829 11:51:16.374386    3679 config.go:182] Loaded profile config "multinode-531000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0829 11:51:16.374454    3679 driver.go:392] Setting default libvirt URI to qemu:///system
	I0829 11:51:16.378946    3679 out.go:177] * Using the qemu2 driver based on existing profile
	I0829 11:51:16.386109    3679 start.go:297] selected driver: qemu2
	I0829 11:51:16.386117    3679 start.go:901] validating driver "qemu2" against &{Name:multinode-531000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-531000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 11:51:16.386186    3679 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0829 11:51:16.388605    3679 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0829 11:51:16.388665    3679 cni.go:84] Creating CNI manager for ""
	I0829 11:51:16.388670    3679 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0829 11:51:16.388713    3679 start.go:340] cluster config:
	{Name:multinode-531000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-531000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 11:51:16.392568    3679 iso.go:125] acquiring lock: {Name:mke1867919ff797f263eb38fb348ce00bba9d753 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 11:51:16.400067    3679 out.go:177] * Starting "multinode-531000" primary control-plane node in "multinode-531000" cluster
	I0829 11:51:16.404141    3679 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0829 11:51:16.404154    3679 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19531-965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0829 11:51:16.404163    3679 cache.go:56] Caching tarball of preloaded images
	I0829 11:51:16.404220    3679 preload.go:172] Found /Users/jenkins/minikube-integration/19531-965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0829 11:51:16.404225    3679 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0829 11:51:16.404280    3679 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/multinode-531000/config.json ...
	I0829 11:51:16.404723    3679 start.go:360] acquireMachinesLock for multinode-531000: {Name:mk54c878341df5948b162b01843176e59d4f0973 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0829 11:51:16.404757    3679 start.go:364] duration metric: took 28.458µs to acquireMachinesLock for "multinode-531000"
	I0829 11:51:16.404766    3679 start.go:96] Skipping create...Using existing machine configuration
	I0829 11:51:16.404772    3679 fix.go:54] fixHost starting: 
	I0829 11:51:16.404890    3679 fix.go:112] recreateIfNeeded on multinode-531000: state=Stopped err=<nil>
	W0829 11:51:16.404898    3679 fix.go:138] unexpected machine state, will restart: <nil>
	I0829 11:51:16.409141    3679 out.go:177] * Restarting existing qemu2 VM for "multinode-531000" ...
	I0829 11:51:16.417081    3679 qemu.go:418] Using hvf for hardware acceleration
	I0829 11:51:16.417119    3679 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19531-965/.minikube/machines/multinode-531000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19531-965/.minikube/machines/multinode-531000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19531-965/.minikube/machines/multinode-531000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:c1:13:22:1b:11 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19531-965/.minikube/machines/multinode-531000/disk.qcow2
	I0829 11:51:16.419234    3679 main.go:141] libmachine: STDOUT: 
	I0829 11:51:16.419253    3679 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0829 11:51:16.419280    3679 fix.go:56] duration metric: took 14.510459ms for fixHost
	I0829 11:51:16.419284    3679 start.go:83] releasing machines lock for "multinode-531000", held for 14.52275ms
	W0829 11:51:16.419291    3679 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0829 11:51:16.419326    3679 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0829 11:51:16.419331    3679 start.go:729] Will try again in 5 seconds ...
	I0829 11:51:21.421528    3679 start.go:360] acquireMachinesLock for multinode-531000: {Name:mk54c878341df5948b162b01843176e59d4f0973 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0829 11:51:21.422023    3679 start.go:364] duration metric: took 339.542µs to acquireMachinesLock for "multinode-531000"
	I0829 11:51:21.422168    3679 start.go:96] Skipping create...Using existing machine configuration
	I0829 11:51:21.422197    3679 fix.go:54] fixHost starting: 
	I0829 11:51:21.422909    3679 fix.go:112] recreateIfNeeded on multinode-531000: state=Stopped err=<nil>
	W0829 11:51:21.422939    3679 fix.go:138] unexpected machine state, will restart: <nil>
	I0829 11:51:21.427521    3679 out.go:177] * Restarting existing qemu2 VM for "multinode-531000" ...
	I0829 11:51:21.431492    3679 qemu.go:418] Using hvf for hardware acceleration
	I0829 11:51:21.431748    3679 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19531-965/.minikube/machines/multinode-531000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19531-965/.minikube/machines/multinode-531000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19531-965/.minikube/machines/multinode-531000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:c1:13:22:1b:11 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19531-965/.minikube/machines/multinode-531000/disk.qcow2
	I0829 11:51:21.441303    3679 main.go:141] libmachine: STDOUT: 
	I0829 11:51:21.441395    3679 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0829 11:51:21.441500    3679 fix.go:56] duration metric: took 19.308417ms for fixHost
	I0829 11:51:21.441517    3679 start.go:83] releasing machines lock for "multinode-531000", held for 19.4665ms
	W0829 11:51:21.441763    3679 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-531000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-531000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0829 11:51:21.449440    3679 out.go:201] 
	W0829 11:51:21.452558    3679 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0829 11:51:21.452584    3679 out.go:270] * 
	* 
	W0829 11:51:21.455501    3679 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0829 11:51:21.463275    3679 out.go:201] 

** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p multinode-531000" : exit status 80
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-531000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-531000 -n multinode-531000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-531000 -n multinode-531000: exit status 7 (33.000958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-531000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (7.45s)
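
Note: every restart in this group dies the same way: libmachine execs qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client, which gets "Connection refused" on /var/run/socket_vmnet, so no VM ever boots. A hedged diagnostic sketch (not part of the test suite) that separates "socket file missing" from "socket present but no daemon listening", the latter being what Connection refused indicates here:

    package main

    import (
        "fmt"
        "net"
        "os"
        "time"
    )

    func main() {
        const sock = "/var/run/socket_vmnet" // path taken from the failures above

        if _, err := os.Stat(sock); err != nil {
            fmt.Println("socket file missing:", err) // socket_vmnet was never started
            return
        }

        // The file exists; a refused dial means nothing is accepting on it,
        // matching the STDERR from socket_vmnet_client in this run.
        conn, err := net.DialTimeout("unix", sock, time.Second)
        if err != nil {
            fmt.Println("dial failed:", err)
            return
        }
        conn.Close()
        fmt.Println("socket_vmnet is accepting connections")
    }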

TestMultiNode/serial/DeleteNode (0.1s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-531000 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-531000 node delete m03: exit status 83 (39.390166ms)

-- stdout --
	* The control-plane node multinode-531000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-531000"

-- /stdout --
multinode_test.go:418: node delete returned an error. args "out/minikube-darwin-arm64 -p multinode-531000 node delete m03": exit status 83
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-531000 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-531000 status --alsologtostderr: exit status 7 (28.878375ms)

-- stdout --
	multinode-531000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0829 11:51:21.646636    3695 out.go:345] Setting OutFile to fd 1 ...
	I0829 11:51:21.646793    3695 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 11:51:21.646796    3695 out.go:358] Setting ErrFile to fd 2...
	I0829 11:51:21.646798    3695 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 11:51:21.646941    3695 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19531-965/.minikube/bin
	I0829 11:51:21.647059    3695 out.go:352] Setting JSON to false
	I0829 11:51:21.647070    3695 mustload.go:65] Loading cluster: multinode-531000
	I0829 11:51:21.647134    3695 notify.go:220] Checking for updates...
	I0829 11:51:21.647273    3695 config.go:182] Loaded profile config "multinode-531000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0829 11:51:21.647280    3695 status.go:255] checking status of multinode-531000 ...
	I0829 11:51:21.647481    3695 status.go:330] multinode-531000 host status = "Stopped" (err=<nil>)
	I0829 11:51:21.647485    3695 status.go:343] host is not running, skipping remaining checks
	I0829 11:51:21.647488    3695 status.go:257] multinode-531000 status: &{Name:multinode-531000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:424: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-531000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-531000 -n multinode-531000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-531000 -n multinode-531000: exit status 7 (29.828ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-531000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.10s)
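
Note: three distinct exit codes recur in this report, each paired with a specific failure mode in the adjacent output. A small sketch collecting them; the descriptions are read off this run's log lines, not taken from minikube documentation:

    package main

    import "fmt"

    func main() {
        // Meanings inferred from the messages printed alongside each code above.
        codes := map[int]string{
            7:  "status: host stopped (helpers_test.go notes this \"may be ok\")",
            80: "start failed: Exiting due to GUEST_PROVISION / driver start error",
            83: "command refused: control-plane node host is not running (state=Stopped)",
        }
        for code, meaning := range codes {
            fmt.Printf("exit status %d: %s\n", code, meaning)
        }
    }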

TestMultiNode/serial/StopMultiNode (2.17s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-531000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-arm64 -p multinode-531000 stop: (2.049690959s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-531000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-531000 status: exit status 7 (63.106459ms)

-- stdout --
	multinode-531000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-531000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-531000 status --alsologtostderr: exit status 7 (32.299833ms)

-- stdout --
	multinode-531000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0829 11:51:23.822061    3713 out.go:345] Setting OutFile to fd 1 ...
	I0829 11:51:23.822208    3713 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 11:51:23.822211    3713 out.go:358] Setting ErrFile to fd 2...
	I0829 11:51:23.822213    3713 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 11:51:23.822342    3713 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19531-965/.minikube/bin
	I0829 11:51:23.822468    3713 out.go:352] Setting JSON to false
	I0829 11:51:23.822478    3713 mustload.go:65] Loading cluster: multinode-531000
	I0829 11:51:23.822531    3713 notify.go:220] Checking for updates...
	I0829 11:51:23.822671    3713 config.go:182] Loaded profile config "multinode-531000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0829 11:51:23.822678    3713 status.go:255] checking status of multinode-531000 ...
	I0829 11:51:23.822882    3713 status.go:330] multinode-531000 host status = "Stopped" (err=<nil>)
	I0829 11:51:23.822888    3713 status.go:343] host is not running, skipping remaining checks
	I0829 11:51:23.822890    3713 status.go:257] multinode-531000 status: &{Name:multinode-531000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-531000 status --alsologtostderr": multinode-531000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-531000 status --alsologtostderr": multinode-531000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-531000 -n multinode-531000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-531000 -n multinode-531000: exit status 7 (28.779167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-531000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (2.17s)
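
Note: the assertions at multinode_test.go:364 and :368 fail because the cluster never grew past a single node, so the status text carries one "host: Stopped" / "kubelet: Stopped" pair where the test expects one per node. The check reduces to substring counting over the status output; a minimal sketch, with the expected count of 2 an assumption for a two-node cluster:

    package main

    import (
        "fmt"
        "strings"
    )

    func main() {
        // Status text exactly as printed in the run above (one node only).
        status := "multinode-531000\n" +
            "type: Control Plane\n" +
            "host: Stopped\n" +
            "kubelet: Stopped\n" +
            "apiserver: Stopped\n" +
            "kubeconfig: Stopped\n"

        hosts := strings.Count(status, "host: Stopped")
        kubelets := strings.Count(status, "kubelet: Stopped")

        const want = 2 // assumed node count for a two-node cluster
        fmt.Printf("stopped hosts: %d, stopped kubelets: %d (want %d each)\n",
            hosts, kubelets, want)
    }

With the single-node status above, both counts come out as 1, which is exactly the "incorrect number of stopped hosts/kubelets" failure logged by the test.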

TestMultiNode/serial/RestartMultiNode (5.26s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-531000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-531000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.182697917s)

-- stdout --
	* [multinode-531000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19531
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19531-965/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19531-965/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-531000" primary control-plane node in "multinode-531000" cluster
	* Restarting existing qemu2 VM for "multinode-531000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-531000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0829 11:51:23.879396    3717 out.go:345] Setting OutFile to fd 1 ...
	I0829 11:51:23.879511    3717 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 11:51:23.879514    3717 out.go:358] Setting ErrFile to fd 2...
	I0829 11:51:23.879517    3717 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 11:51:23.879652    3717 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19531-965/.minikube/bin
	I0829 11:51:23.880657    3717 out.go:352] Setting JSON to false
	I0829 11:51:23.896620    3717 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3047,"bootTime":1724954436,"procs":466,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0829 11:51:23.896725    3717 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0829 11:51:23.902349    3717 out.go:177] * [multinode-531000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0829 11:51:23.908243    3717 out.go:177]   - MINIKUBE_LOCATION=19531
	I0829 11:51:23.908325    3717 notify.go:220] Checking for updates...
	I0829 11:51:23.916168    3717 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19531-965/kubeconfig
	I0829 11:51:23.919369    3717 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0829 11:51:23.922273    3717 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0829 11:51:23.925239    3717 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19531-965/.minikube
	I0829 11:51:23.928266    3717 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0829 11:51:23.931504    3717 config.go:182] Loaded profile config "multinode-531000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0829 11:51:23.931759    3717 driver.go:392] Setting default libvirt URI to qemu:///system
	I0829 11:51:23.935208    3717 out.go:177] * Using the qemu2 driver based on existing profile
	I0829 11:51:23.942305    3717 start.go:297] selected driver: qemu2
	I0829 11:51:23.942312    3717 start.go:901] validating driver "qemu2" against &{Name:multinode-531000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-531000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 11:51:23.942379    3717 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0829 11:51:23.944574    3717 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0829 11:51:23.944600    3717 cni.go:84] Creating CNI manager for ""
	I0829 11:51:23.944605    3717 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0829 11:51:23.944659    3717 start.go:340] cluster config:
	{Name:multinode-531000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:multinode-531000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 11:51:23.948138    3717 iso.go:125] acquiring lock: {Name:mke1867919ff797f263eb38fb348ce00bba9d753 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 11:51:23.956210    3717 out.go:177] * Starting "multinode-531000" primary control-plane node in "multinode-531000" cluster
	I0829 11:51:23.960255    3717 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0829 11:51:23.960271    3717 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19531-965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0829 11:51:23.960282    3717 cache.go:56] Caching tarball of preloaded images
	I0829 11:51:23.960339    3717 preload.go:172] Found /Users/jenkins/minikube-integration/19531-965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0829 11:51:23.960347    3717 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0829 11:51:23.960407    3717 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/multinode-531000/config.json ...
	I0829 11:51:23.960865    3717 start.go:360] acquireMachinesLock for multinode-531000: {Name:mk54c878341df5948b162b01843176e59d4f0973 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0829 11:51:23.960899    3717 start.go:364] duration metric: took 26.208µs to acquireMachinesLock for "multinode-531000"
	I0829 11:51:23.960908    3717 start.go:96] Skipping create...Using existing machine configuration
	I0829 11:51:23.960913    3717 fix.go:54] fixHost starting: 
	I0829 11:51:23.961028    3717 fix.go:112] recreateIfNeeded on multinode-531000: state=Stopped err=<nil>
	W0829 11:51:23.961037    3717 fix.go:138] unexpected machine state, will restart: <nil>
	I0829 11:51:23.969290    3717 out.go:177] * Restarting existing qemu2 VM for "multinode-531000" ...
	I0829 11:51:23.973211    3717 qemu.go:418] Using hvf for hardware acceleration
	I0829 11:51:23.973254    3717 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19531-965/.minikube/machines/multinode-531000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19531-965/.minikube/machines/multinode-531000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19531-965/.minikube/machines/multinode-531000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:c1:13:22:1b:11 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19531-965/.minikube/machines/multinode-531000/disk.qcow2
	I0829 11:51:23.975340    3717 main.go:141] libmachine: STDOUT: 
	I0829 11:51:23.975367    3717 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0829 11:51:23.975402    3717 fix.go:56] duration metric: took 14.4895ms for fixHost
	I0829 11:51:23.975406    3717 start.go:83] releasing machines lock for "multinode-531000", held for 14.502958ms
	W0829 11:51:23.975421    3717 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0829 11:51:23.975456    3717 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0829 11:51:23.975461    3717 start.go:729] Will try again in 5 seconds ...
	I0829 11:51:28.977571    3717 start.go:360] acquireMachinesLock for multinode-531000: {Name:mk54c878341df5948b162b01843176e59d4f0973 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0829 11:51:28.977946    3717 start.go:364] duration metric: took 297.208µs to acquireMachinesLock for "multinode-531000"
	I0829 11:51:28.978088    3717 start.go:96] Skipping create...Using existing machine configuration
	I0829 11:51:28.978107    3717 fix.go:54] fixHost starting: 
	I0829 11:51:28.978786    3717 fix.go:112] recreateIfNeeded on multinode-531000: state=Stopped err=<nil>
	W0829 11:51:28.978813    3717 fix.go:138] unexpected machine state, will restart: <nil>
	I0829 11:51:28.984318    3717 out.go:177] * Restarting existing qemu2 VM for "multinode-531000" ...
	I0829 11:51:28.992313    3717 qemu.go:418] Using hvf for hardware acceleration
	I0829 11:51:28.992638    3717 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19531-965/.minikube/machines/multinode-531000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19531-965/.minikube/machines/multinode-531000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19531-965/.minikube/machines/multinode-531000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:c1:13:22:1b:11 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19531-965/.minikube/machines/multinode-531000/disk.qcow2
	I0829 11:51:29.001665    3717 main.go:141] libmachine: STDOUT: 
	I0829 11:51:29.001772    3717 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0829 11:51:29.001855    3717 fix.go:56] duration metric: took 23.748583ms for fixHost
	I0829 11:51:29.001879    3717 start.go:83] releasing machines lock for "multinode-531000", held for 23.909167ms
	W0829 11:51:29.002080    3717 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-531000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-531000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0829 11:51:29.008308    3717 out.go:201] 
	W0829 11:51:29.011297    3717 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0829 11:51:29.011331    3717 out.go:270] * 
	* 
	W0829 11:51:29.013812    3717 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0829 11:51:29.022257    3717 out.go:201] 

** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-531000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-531000 -n multinode-531000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-531000 -n multinode-531000: exit status 7 (70.757625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-531000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.26s)
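
Note: the restart flow logged above (start.go:714 and start.go:729) is one bounded retry: fixHost fails, minikube warns "StartHost failed, but will try again", sleeps five seconds, retries once, then exits with GUEST_PROVISION. A simplified sketch of that control flow; startHost here is a placeholder for the failing driver start, not minikube's implementation:

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // startHost stands in for the qemu2 driver start that fails in this run.
    func startHost() error {
        return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
    }

    func main() {
        err := startHost()
        if err != nil {
            fmt.Println("! StartHost failed, but will try again:", err)
            time.Sleep(5 * time.Second) // matches "Will try again in 5 seconds ..."
            err = startHost()
        }
        if err != nil {
            fmt.Println("X Exiting due to GUEST_PROVISION:", err)
        }
    }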

TestMultiNode/serial/ValidateNameConflict (19.98s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-531000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-531000-m01 --driver=qemu2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-531000-m01 --driver=qemu2 : exit status 80 (9.84873575s)

-- stdout --
	* [multinode-531000-m01] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19531
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19531-965/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19531-965/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-531000-m01" primary control-plane node in "multinode-531000-m01" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-531000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-531000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-531000-m02 --driver=qemu2 
multinode_test.go:472: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-531000-m02 --driver=qemu2 : exit status 80 (9.908123667s)

-- stdout --
	* [multinode-531000-m02] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19531
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19531-965/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19531-965/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-531000-m02" primary control-plane node in "multinode-531000-m02" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-531000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-531000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:474: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-531000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-531000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-531000: exit status 83 (78.958083ms)

-- stdout --
	* The control-plane node multinode-531000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-531000"

-- /stdout --
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-531000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-531000 -n multinode-531000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-531000 -n multinode-531000: exit status 7 (30.113958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-531000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (19.98s)
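
Note: ValidateNameConflict starts profiles named multinode-531000-m01 and -m02 because minikube derives secondary node names from the profile name plus an -mNN suffix, so profiles named that way can collide with an existing cluster's node names. A hedged sketch of what such a collision check could look like; the helper and regexp are illustrative assumptions, not minikube's code:

    package main

    import (
        "fmt"
        "regexp"
    )

    // conflictsWith reports whether a candidate profile name collides with the
    // node-name scheme (<profile>-mNN) of an existing profile. Illustrative only.
    func conflictsWith(existing, candidate string) bool {
        re := regexp.MustCompile("^" + regexp.QuoteMeta(existing) + `-m\d{2}$`)
        return re.MatchString(candidate)
    }

    func main() {
        fmt.Println(conflictsWith("multinode-531000", "multinode-531000-m01")) // true
        fmt.Println(conflictsWith("multinode-531000", "multinode-531000-x"))   // false
    }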

TestPreload (9.91s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-106000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-106000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (9.758449s)

-- stdout --
	* [test-preload-106000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19531
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19531-965/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19531-965/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "test-preload-106000" primary control-plane node in "test-preload-106000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-106000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0829 11:51:49.223802    3776 out.go:345] Setting OutFile to fd 1 ...
	I0829 11:51:49.223939    3776 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 11:51:49.223942    3776 out.go:358] Setting ErrFile to fd 2...
	I0829 11:51:49.223945    3776 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 11:51:49.224065    3776 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19531-965/.minikube/bin
	I0829 11:51:49.225134    3776 out.go:352] Setting JSON to false
	I0829 11:51:49.241264    3776 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3073,"bootTime":1724954436,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0829 11:51:49.241325    3776 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0829 11:51:49.247254    3776 out.go:177] * [test-preload-106000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0829 11:51:49.256084    3776 out.go:177]   - MINIKUBE_LOCATION=19531
	I0829 11:51:49.256122    3776 notify.go:220] Checking for updates...
	I0829 11:51:49.264014    3776 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19531-965/kubeconfig
	I0829 11:51:49.267060    3776 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0829 11:51:49.271041    3776 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0829 11:51:49.274014    3776 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19531-965/.minikube
	I0829 11:51:49.277088    3776 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0829 11:51:49.280337    3776 config.go:182] Loaded profile config "multinode-531000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0829 11:51:49.280404    3776 driver.go:392] Setting default libvirt URI to qemu:///system
	I0829 11:51:49.285037    3776 out.go:177] * Using the qemu2 driver based on user configuration
	I0829 11:51:49.292097    3776 start.go:297] selected driver: qemu2
	I0829 11:51:49.292104    3776 start.go:901] validating driver "qemu2" against <nil>
	I0829 11:51:49.292120    3776 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0829 11:51:49.294418    3776 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0829 11:51:49.299061    3776 out.go:177] * Automatically selected the socket_vmnet network
	I0829 11:51:49.302143    3776 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0829 11:51:49.302170    3776 cni.go:84] Creating CNI manager for ""
	I0829 11:51:49.302179    3776 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0829 11:51:49.302184    3776 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0829 11:51:49.302216    3776 start.go:340] cluster config:
	{Name:test-preload-106000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-106000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 11:51:49.305869    3776 iso.go:125] acquiring lock: {Name:mke1867919ff797f263eb38fb348ce00bba9d753 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 11:51:49.314996    3776 out.go:177] * Starting "test-preload-106000" primary control-plane node in "test-preload-106000" cluster
	I0829 11:51:49.319096    3776 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I0829 11:51:49.319200    3776 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/test-preload-106000/config.json ...
	I0829 11:51:49.319222    3776 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/test-preload-106000/config.json: {Name:mk29ac84de38dda352eccc395073bced38bff14f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 11:51:49.319214    3776 cache.go:107] acquiring lock: {Name:mk43611890887523ca89f123aa3a4398077d7dd3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 11:51:49.319217    3776 cache.go:107] acquiring lock: {Name:mk86eab30c86ffb0e5394445b64f3f9bfbcf7cd2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 11:51:49.319286    3776 cache.go:107] acquiring lock: {Name:mk760e69cf7d48b62976e2b1a98d98ee3d33d20d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 11:51:49.319415    3776 cache.go:107] acquiring lock: {Name:mke4777ed4eca1d0c78c915b9f2d002dd0346e1c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 11:51:49.319482    3776 cache.go:107] acquiring lock: {Name:mk555b4b81f63a494835152b6028588b074b17d6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 11:51:49.319516    3776 cache.go:107] acquiring lock: {Name:mkd2a782066f37a2f7e0ff2af1290f0e32c5b804 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 11:51:49.319554    3776 start.go:360] acquireMachinesLock for test-preload-106000: {Name:mk54c878341df5948b162b01843176e59d4f0973 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0829 11:51:49.319562    3776 cache.go:107] acquiring lock: {Name:mkb6e3cbd291d9788e7e5c6f64f61c22a73bb585 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 11:51:49.319591    3776 start.go:364] duration metric: took 32.084µs to acquireMachinesLock for "test-preload-106000"
	I0829 11:51:49.319639    3776 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 11:51:49.319643    3776 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0829 11:51:49.319647    3776 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0829 11:51:49.319684    3776 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0829 11:51:49.319674    3776 cache.go:107] acquiring lock: {Name:mk6abe8ae37a1aaba29503195b54f1a5197aac82 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 11:51:49.319707    3776 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0829 11:51:49.319746    3776 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0829 11:51:49.319669    3776 start.go:93] Provisioning new machine with config: &{Name:test-preload-106000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-106000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0829 11:51:49.319818    3776 start.go:125] createHost starting for "" (driver="qemu2")
	I0829 11:51:49.319862    3776 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0829 11:51:49.319944    3776 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0829 11:51:49.327880    3776 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0829 11:51:49.331708    3776 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0829 11:51:49.332543    3776 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0829 11:51:49.334041    3776 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0829 11:51:49.334096    3776 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0829 11:51:49.334138    3776 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0829 11:51:49.334136    3776 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0829 11:51:49.334170    3776 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 11:51:49.334238    3776 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0829 11:51:49.348207    3776 start.go:159] libmachine.API.Create for "test-preload-106000" (driver="qemu2")
	I0829 11:51:49.348228    3776 client.go:168] LocalClient.Create starting
	I0829 11:51:49.348313    3776 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19531-965/.minikube/certs/ca.pem
	I0829 11:51:49.348347    3776 main.go:141] libmachine: Decoding PEM data...
	I0829 11:51:49.348356    3776 main.go:141] libmachine: Parsing certificate...
	I0829 11:51:49.348398    3776 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19531-965/.minikube/certs/cert.pem
	I0829 11:51:49.348422    3776 main.go:141] libmachine: Decoding PEM data...
	I0829 11:51:49.348430    3776 main.go:141] libmachine: Parsing certificate...
	I0829 11:51:49.348798    3776 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19531-965/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19531-965/.minikube/cache/iso/arm64/minikube-v1.33.1-1724775098-19521-arm64.iso...
	I0829 11:51:49.509074    3776 main.go:141] libmachine: Creating SSH key...
	I0829 11:51:49.558693    3776 main.go:141] libmachine: Creating Disk image...
	I0829 11:51:49.558712    3776 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0829 11:51:49.558916    3776 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19531-965/.minikube/machines/test-preload-106000/disk.qcow2.raw /Users/jenkins/minikube-integration/19531-965/.minikube/machines/test-preload-106000/disk.qcow2
	I0829 11:51:49.568546    3776 main.go:141] libmachine: STDOUT: 
	I0829 11:51:49.568580    3776 main.go:141] libmachine: STDERR: 
	I0829 11:51:49.568662    3776 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19531-965/.minikube/machines/test-preload-106000/disk.qcow2 +20000M
	I0829 11:51:49.578319    3776 main.go:141] libmachine: STDOUT: Image resized.
	
	I0829 11:51:49.578339    3776 main.go:141] libmachine: STDERR: 
	I0829 11:51:49.578354    3776 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19531-965/.minikube/machines/test-preload-106000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19531-965/.minikube/machines/test-preload-106000/disk.qcow2
	I0829 11:51:49.578358    3776 main.go:141] libmachine: Starting QEMU VM...
	I0829 11:51:49.578374    3776 qemu.go:418] Using hvf for hardware acceleration
	I0829 11:51:49.578407    3776 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19531-965/.minikube/machines/test-preload-106000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19531-965/.minikube/machines/test-preload-106000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19531-965/.minikube/machines/test-preload-106000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:a5:f7:14:4f:de -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19531-965/.minikube/machines/test-preload-106000/disk.qcow2
	I0829 11:51:49.580273    3776 main.go:141] libmachine: STDOUT: 
	I0829 11:51:49.580290    3776 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0829 11:51:49.580307    3776 client.go:171] duration metric: took 232.0795ms to LocalClient.Create
	I0829 11:51:50.346657    3776 cache.go:162] opening:  /Users/jenkins/minikube-integration/19531-965/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	I0829 11:51:50.387649    3776 cache.go:162] opening:  /Users/jenkins/minikube-integration/19531-965/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0829 11:51:50.401056    3776 cache.go:162] opening:  /Users/jenkins/minikube-integration/19531-965/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0829 11:51:50.420915    3776 cache.go:162] opening:  /Users/jenkins/minikube-integration/19531-965/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0829 11:51:50.497394    3776 cache.go:157] /Users/jenkins/minikube-integration/19531-965/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I0829 11:51:50.497467    3776 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/19531-965/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 1.178199833s
	I0829 11:51:50.497510    3776 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/19531-965/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	W0829 11:51:50.565611    3776 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0829 11:51:50.565736    3776 cache.go:162] opening:  /Users/jenkins/minikube-integration/19531-965/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0829 11:51:50.581861    3776 cache.go:162] opening:  /Users/jenkins/minikube-integration/19531-965/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	I0829 11:51:50.605768    3776 cache.go:162] opening:  /Users/jenkins/minikube-integration/19531-965/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	I0829 11:51:51.580589    3776 start.go:128] duration metric: took 2.260777042s to createHost
	I0829 11:51:51.580637    3776 start.go:83] releasing machines lock for "test-preload-106000", held for 2.261069375s
	W0829 11:51:51.580699    3776 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0829 11:51:51.592977    3776 out.go:177] * Deleting "test-preload-106000" in qemu2 ...
	W0829 11:51:51.623769    3776 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0829 11:51:51.623817    3776 start.go:729] Will try again in 5 seconds ...
	I0829 11:51:52.268405    3776 cache.go:157] /Users/jenkins/minikube-integration/19531-965/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I0829 11:51:52.268545    3776 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/19531-965/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 2.949117166s
	I0829 11:51:52.268577    3776 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/19531-965/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I0829 11:51:52.734527    3776 cache.go:157] /Users/jenkins/minikube-integration/19531-965/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I0829 11:51:52.734601    3776 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/19531-965/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 3.415227208s
	I0829 11:51:52.734632    3776 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/19531-965/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I0829 11:51:53.713252    3776 cache.go:157] /Users/jenkins/minikube-integration/19531-965/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I0829 11:51:53.713329    3776 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/19531-965/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 4.394184333s
	I0829 11:51:53.713359    3776 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/19531-965/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I0829 11:51:54.617610    3776 cache.go:157] /Users/jenkins/minikube-integration/19531-965/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I0829 11:51:54.617673    3776 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/19531-965/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 5.298072083s
	I0829 11:51:54.617700    3776 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/19531-965/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I0829 11:51:56.623935    3776 start.go:360] acquireMachinesLock for test-preload-106000: {Name:mk54c878341df5948b162b01843176e59d4f0973 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0829 11:51:56.624334    3776 start.go:364] duration metric: took 339.292µs to acquireMachinesLock for "test-preload-106000"
	I0829 11:51:56.624458    3776 start.go:93] Provisioning new machine with config: &{Name:test-preload-106000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-106000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0829 11:51:56.624680    3776 start.go:125] createHost starting for "" (driver="qemu2")
	I0829 11:51:56.636273    3776 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0829 11:51:56.639166    3776 cache.go:157] /Users/jenkins/minikube-integration/19531-965/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I0829 11:51:56.639287    3776 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/19531-965/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 7.319845375s
	I0829 11:51:56.639312    3776 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/19531-965/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I0829 11:51:56.686830    3776 start.go:159] libmachine.API.Create for "test-preload-106000" (driver="qemu2")
	I0829 11:51:56.686890    3776 client.go:168] LocalClient.Create starting
	I0829 11:51:56.687018    3776 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19531-965/.minikube/certs/ca.pem
	I0829 11:51:56.687079    3776 main.go:141] libmachine: Decoding PEM data...
	I0829 11:51:56.687093    3776 main.go:141] libmachine: Parsing certificate...
	I0829 11:51:56.687149    3776 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19531-965/.minikube/certs/cert.pem
	I0829 11:51:56.687193    3776 main.go:141] libmachine: Decoding PEM data...
	I0829 11:51:56.687207    3776 main.go:141] libmachine: Parsing certificate...
	I0829 11:51:56.687701    3776 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19531-965/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19531-965/.minikube/cache/iso/arm64/minikube-v1.33.1-1724775098-19521-arm64.iso...
	I0829 11:51:56.856881    3776 main.go:141] libmachine: Creating SSH key...
	I0829 11:51:56.884792    3776 main.go:141] libmachine: Creating Disk image...
	I0829 11:51:56.884798    3776 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0829 11:51:56.884983    3776 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19531-965/.minikube/machines/test-preload-106000/disk.qcow2.raw /Users/jenkins/minikube-integration/19531-965/.minikube/machines/test-preload-106000/disk.qcow2
	I0829 11:51:56.894540    3776 main.go:141] libmachine: STDOUT: 
	I0829 11:51:56.894559    3776 main.go:141] libmachine: STDERR: 
	I0829 11:51:56.894612    3776 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19531-965/.minikube/machines/test-preload-106000/disk.qcow2 +20000M
	I0829 11:51:56.902781    3776 main.go:141] libmachine: STDOUT: Image resized.
	
	I0829 11:51:56.902799    3776 main.go:141] libmachine: STDERR: 
	I0829 11:51:56.902808    3776 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19531-965/.minikube/machines/test-preload-106000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19531-965/.minikube/machines/test-preload-106000/disk.qcow2
	I0829 11:51:56.902812    3776 main.go:141] libmachine: Starting QEMU VM...
	I0829 11:51:56.902826    3776 qemu.go:418] Using hvf for hardware acceleration
	I0829 11:51:56.902858    3776 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19531-965/.minikube/machines/test-preload-106000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19531-965/.minikube/machines/test-preload-106000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19531-965/.minikube/machines/test-preload-106000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:ac:80:f3:8e:fa -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19531-965/.minikube/machines/test-preload-106000/disk.qcow2
	I0829 11:51:56.904524    3776 main.go:141] libmachine: STDOUT: 
	I0829 11:51:56.904543    3776 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0829 11:51:56.904555    3776 client.go:171] duration metric: took 217.663708ms to LocalClient.Create
	I0829 11:51:57.421327    3776 cache.go:157] /Users/jenkins/minikube-integration/19531-965/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 exists
	I0829 11:51:57.421407    3776 cache.go:96] cache image "registry.k8s.io/etcd:3.5.3-0" -> "/Users/jenkins/minikube-integration/19531-965/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0" took 8.102138s
	I0829 11:51:57.421436    3776 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.3-0 -> /Users/jenkins/minikube-integration/19531-965/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 succeeded
	I0829 11:51:58.906755    3776 start.go:128] duration metric: took 2.282053708s to createHost
	I0829 11:51:58.906826    3776 start.go:83] releasing machines lock for "test-preload-106000", held for 2.28249825s
	W0829 11:51:58.907152    3776 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-106000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p test-preload-106000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0829 11:51:58.916703    3776 out.go:201] 
	W0829 11:51:58.926681    3776 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0829 11:51:58.926719    3776 out.go:270] * 
	* 
	W0829 11:51:58.929339    3776 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0829 11:51:58.939635    3776 out.go:201] 

** /stderr **
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-106000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:626: *** TestPreload FAILED at 2024-08-29 11:51:58.957725 -0700 PDT m=+2843.904363126
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-106000 -n test-preload-106000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-106000 -n test-preload-106000: exit status 7 (68.751375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-106000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-106000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-106000
--- FAIL: TestPreload (9.91s)

TestScheduledStopUnix (10.09s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-579000 --memory=2048 --driver=qemu2 
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-579000 --memory=2048 --driver=qemu2 : exit status 80 (9.93982875s)

-- stdout --
	* [scheduled-stop-579000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19531
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19531-965/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19531-965/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-579000" primary control-plane node in "scheduled-stop-579000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-579000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-579000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

-- stdout --
	* [scheduled-stop-579000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19531
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19531-965/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19531-965/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-579000" primary control-plane node in "scheduled-stop-579000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-579000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-579000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:626: *** TestScheduledStopUnix FAILED at 2024-08-29 11:52:09.048072 -0700 PDT m=+2853.994854668
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-579000 -n scheduled-stop-579000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-579000 -n scheduled-stop-579000: exit status 7 (67.269042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-579000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-579000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-579000
--- FAIL: TestScheduledStopUnix (10.09s)

TestSkaffold (12.73s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/skaffold.exe2446651782 version
skaffold_test.go:59: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/skaffold.exe2446651782 version: (1.052857958s)
skaffold_test.go:63: skaffold version: v2.13.2
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-303000 --memory=2600 --driver=qemu2 
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-303000 --memory=2600 --driver=qemu2 : exit status 80 (9.976703458s)

-- stdout --
	* [skaffold-303000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19531
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19531-965/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19531-965/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-303000" primary control-plane node in "skaffold-303000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-303000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-303000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
skaffold_test.go:68: starting minikube: exit status 80

-- stdout --
	* [skaffold-303000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19531
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19531-965/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19531-965/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-303000" primary control-plane node in "skaffold-303000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-303000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-303000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:626: *** TestSkaffold FAILED at 2024-08-29 11:52:21.785554 -0700 PDT m=+2866.732520793
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-303000 -n skaffold-303000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-303000 -n skaffold-303000: exit status 7 (61.409541ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-303000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-303000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-303000
--- FAIL: TestSkaffold (12.73s)

TestRunningBinaryUpgrade (655.73s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.3379775673 start -p running-upgrade-373000 --memory=2200 --vm-driver=qemu2 
E0829 11:53:17.062888    1418 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/functional-312000/client.crt: no such file or directory" logger="UnhandledError"
E0829 11:53:32.543007    1418 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/addons-048000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:120: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.3379775673 start -p running-upgrade-373000 --memory=2200 --vm-driver=qemu2 : (1m23.591480541s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-arm64 start -p running-upgrade-373000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
E0829 11:56:35.632085    1418 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/addons-048000/client.crt: no such file or directory" logger="UnhandledError"
E0829 11:58:17.058531    1418 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/functional-312000/client.crt: no such file or directory" logger="UnhandledError"
E0829 11:58:32.538204    1418 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/addons-048000/client.crt: no such file or directory" logger="UnhandledError"
E0829 12:01:20.150745    1418 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/functional-312000/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:130: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p running-upgrade-373000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m58.4209405s)

-- stdout --
	* [running-upgrade-373000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19531
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19531-965/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19531-965/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	* Using the qemu2 driver based on existing profile
	* Starting "running-upgrade-373000" primary control-plane node in "running-upgrade-373000" cluster
	* Updating the running qemu2 "running-upgrade-373000" VM ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0829 11:54:08.276839    4119 out.go:345] Setting OutFile to fd 1 ...
	I0829 11:54:08.276977    4119 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 11:54:08.276986    4119 out.go:358] Setting ErrFile to fd 2...
	I0829 11:54:08.276989    4119 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 11:54:08.277130    4119 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19531-965/.minikube/bin
	I0829 11:54:08.278277    4119 out.go:352] Setting JSON to false
	I0829 11:54:08.296308    4119 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3212,"bootTime":1724954436,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0829 11:54:08.296427    4119 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0829 11:54:08.301145    4119 out.go:177] * [running-upgrade-373000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0829 11:54:08.304136    4119 out.go:177]   - MINIKUBE_LOCATION=19531
	I0829 11:54:08.304182    4119 notify.go:220] Checking for updates...
	I0829 11:54:08.309896    4119 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19531-965/kubeconfig
	I0829 11:54:08.314139    4119 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0829 11:54:08.317141    4119 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0829 11:54:08.318218    4119 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19531-965/.minikube
	I0829 11:54:08.321151    4119 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0829 11:54:08.324428    4119 config.go:182] Loaded profile config "running-upgrade-373000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0829 11:54:08.327108    4119 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0829 11:54:08.330085    4119 driver.go:392] Setting default libvirt URI to qemu:///system
	I0829 11:54:08.334096    4119 out.go:177] * Using the qemu2 driver based on existing profile
	I0829 11:54:08.344117    4119 start.go:297] selected driver: qemu2
	I0829 11:54:08.344124    4119 start.go:901] validating driver "qemu2" against &{Name:running-upgrade-373000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50346 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-373000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0829 11:54:08.344173    4119 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0829 11:54:08.346606    4119 cni.go:84] Creating CNI manager for ""
	I0829 11:54:08.346623    4119 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0829 11:54:08.346648    4119 start.go:340] cluster config:
	{Name:running-upgrade-373000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50346 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-373000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0829 11:54:08.346697    4119 iso.go:125] acquiring lock: {Name:mke1867919ff797f263eb38fb348ce00bba9d753 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 11:54:08.355135    4119 out.go:177] * Starting "running-upgrade-373000" primary control-plane node in "running-upgrade-373000" cluster
	I0829 11:54:08.358135    4119 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0829 11:54:08.358170    4119 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19531-965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0829 11:54:08.358183    4119 cache.go:56] Caching tarball of preloaded images
	I0829 11:54:08.358279    4119 preload.go:172] Found /Users/jenkins/minikube-integration/19531-965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0829 11:54:08.358286    4119 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0829 11:54:08.358351    4119 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/running-upgrade-373000/config.json ...
	I0829 11:54:08.358741    4119 start.go:360] acquireMachinesLock for running-upgrade-373000: {Name:mk54c878341df5948b162b01843176e59d4f0973 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0829 11:54:17.531736    4119 start.go:364] duration metric: took 9.173102917s to acquireMachinesLock for "running-upgrade-373000"
	I0829 11:54:17.531763    4119 start.go:96] Skipping create...Using existing machine configuration
	I0829 11:54:17.531772    4119 fix.go:54] fixHost starting: 
	I0829 11:54:17.532583    4119 fix.go:112] recreateIfNeeded on running-upgrade-373000: state=Running err=<nil>
	W0829 11:54:17.532592    4119 fix.go:138] unexpected machine state, will restart: <nil>
	I0829 11:54:17.540570    4119 out.go:177] * Updating the running qemu2 "running-upgrade-373000" VM ...
	I0829 11:54:17.544694    4119 machine.go:93] provisionDockerMachine start ...
	I0829 11:54:17.544743    4119 main.go:141] libmachine: Using SSH client type: native
	I0829 11:54:17.544860    4119 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1010385a0] 0x10103ae00 <nil>  [] 0s} localhost 50289 <nil> <nil>}
	I0829 11:54:17.544865    4119 main.go:141] libmachine: About to run SSH command:
	hostname
	I0829 11:54:17.602692    4119 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-373000
	
	I0829 11:54:17.602707    4119 buildroot.go:166] provisioning hostname "running-upgrade-373000"
	I0829 11:54:17.602752    4119 main.go:141] libmachine: Using SSH client type: native
	I0829 11:54:17.602883    4119 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1010385a0] 0x10103ae00 <nil>  [] 0s} localhost 50289 <nil> <nil>}
	I0829 11:54:17.602888    4119 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-373000 && echo "running-upgrade-373000" | sudo tee /etc/hostname
	I0829 11:54:17.664406    4119 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-373000
	
	I0829 11:54:17.664459    4119 main.go:141] libmachine: Using SSH client type: native
	I0829 11:54:17.664578    4119 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1010385a0] 0x10103ae00 <nil>  [] 0s} localhost 50289 <nil> <nil>}
	I0829 11:54:17.664586    4119 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-373000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-373000/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-373000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0829 11:54:17.721203    4119 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0829 11:54:17.721213    4119 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19531-965/.minikube CaCertPath:/Users/jenkins/minikube-integration/19531-965/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19531-965/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19531-965/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19531-965/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19531-965/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19531-965/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19531-965/.minikube}
	I0829 11:54:17.721232    4119 buildroot.go:174] setting up certificates
	I0829 11:54:17.721236    4119 provision.go:84] configureAuth start
	I0829 11:54:17.721243    4119 provision.go:143] copyHostCerts
	I0829 11:54:17.721299    4119 exec_runner.go:144] found /Users/jenkins/minikube-integration/19531-965/.minikube/ca.pem, removing ...
	I0829 11:54:17.721305    4119 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19531-965/.minikube/ca.pem
	I0829 11:54:17.721684    4119 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19531-965/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19531-965/.minikube/ca.pem (1082 bytes)
	I0829 11:54:17.721879    4119 exec_runner.go:144] found /Users/jenkins/minikube-integration/19531-965/.minikube/cert.pem, removing ...
	I0829 11:54:17.721884    4119 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19531-965/.minikube/cert.pem
	I0829 11:54:17.721938    4119 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19531-965/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19531-965/.minikube/cert.pem (1123 bytes)
	I0829 11:54:17.722061    4119 exec_runner.go:144] found /Users/jenkins/minikube-integration/19531-965/.minikube/key.pem, removing ...
	I0829 11:54:17.722065    4119 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19531-965/.minikube/key.pem
	I0829 11:54:17.722104    4119 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19531-965/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19531-965/.minikube/key.pem (1675 bytes)
	I0829 11:54:17.722185    4119 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19531-965/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19531-965/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19531-965/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-373000 san=[127.0.0.1 localhost minikube running-upgrade-373000]
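
The server cert above is generated in Go by minikube's provisioner; a rough openssl-CLI equivalent, shown only to illustrate the SANs and CA signing involved (file names are placeholders, not minikube's code path), would be:

    # sketch: issue a server cert signed by the local CA, with the SANs from the log
    openssl genrsa -out server-key.pem 2048
    openssl req -new -key server-key.pem -subj "/O=jenkins.running-upgrade-373000" -out server.csr
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
      -days 365 -out server.pem \
      -extfile <(printf "subjectAltName=IP:127.0.0.1,DNS:localhost,DNS:minikube,DNS:running-upgrade-373000")
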
	I0829 11:54:17.886703    4119 provision.go:177] copyRemoteCerts
	I0829 11:54:17.886734    4119 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0829 11:54:17.886746    4119 sshutil.go:53] new ssh client: &{IP:localhost Port:50289 SSHKeyPath:/Users/jenkins/minikube-integration/19531-965/.minikube/machines/running-upgrade-373000/id_rsa Username:docker}
	I0829 11:54:17.919025    4119 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19531-965/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0829 11:54:17.926221    4119 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19531-965/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0829 11:54:17.932947    4119 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19531-965/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0829 11:54:17.940597    4119 provision.go:87] duration metric: took 219.358417ms to configureAuth
	I0829 11:54:17.940608    4119 buildroot.go:189] setting minikube options for container-runtime
	I0829 11:54:17.940726    4119 config.go:182] Loaded profile config "running-upgrade-373000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0829 11:54:17.940765    4119 main.go:141] libmachine: Using SSH client type: native
	I0829 11:54:17.940857    4119 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1010385a0] 0x10103ae00 <nil>  [] 0s} localhost 50289 <nil> <nil>}
	I0829 11:54:17.940866    4119 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0829 11:54:17.999896    4119 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0829 11:54:17.999907    4119 buildroot.go:70] root file system type: tmpfs
	I0829 11:54:17.999967    4119 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0829 11:54:18.000022    4119 main.go:141] libmachine: Using SSH client type: native
	I0829 11:54:18.000144    4119 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1010385a0] 0x10103ae00 <nil>  [] 0s} localhost 50289 <nil> <nil>}
	I0829 11:54:18.000178    4119 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0829 11:54:18.062917    4119 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0829 11:54:18.062968    4119 main.go:141] libmachine: Using SSH client type: native
	I0829 11:54:18.063090    4119 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1010385a0] 0x10103ae00 <nil>  [] 0s} localhost 50289 <nil> <nil>}
	I0829 11:54:18.063100    4119 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0829 11:54:18.122437    4119 main.go:141] libmachine: SSH cmd err, output: <nil>: 
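
The one-liner above is an update-only-if-changed guard: `diff` exits non-zero when the rendered unit differs from the installed one, and only in that case is the new file swapped in and Docker reloaded and restarted. The same pattern spelled out (paths as in the log):

    # replace the docker unit and restart only when the rendered file differs
    UNIT=/lib/systemd/system/docker.service
    if ! sudo diff -u "$UNIT" "$UNIT.new"; then
        sudo mv "$UNIT.new" "$UNIT"
        sudo systemctl -f daemon-reload
        sudo systemctl -f enable docker
        sudo systemctl -f restart docker
    fi
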
	I0829 11:54:18.122450    4119 machine.go:96] duration metric: took 577.758458ms to provisionDockerMachine
	I0829 11:54:18.122455    4119 start.go:293] postStartSetup for "running-upgrade-373000" (driver="qemu2")
	I0829 11:54:18.122461    4119 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0829 11:54:18.122532    4119 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0829 11:54:18.122541    4119 sshutil.go:53] new ssh client: &{IP:localhost Port:50289 SSHKeyPath:/Users/jenkins/minikube-integration/19531-965/.minikube/machines/running-upgrade-373000/id_rsa Username:docker}
	I0829 11:54:18.155793    4119 ssh_runner.go:195] Run: cat /etc/os-release
	I0829 11:54:18.157036    4119 info.go:137] Remote host: Buildroot 2021.02.12
	I0829 11:54:18.157044    4119 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19531-965/.minikube/addons for local assets ...
	I0829 11:54:18.157123    4119 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19531-965/.minikube/files for local assets ...
	I0829 11:54:18.157206    4119 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19531-965/.minikube/files/etc/ssl/certs/14182.pem -> 14182.pem in /etc/ssl/certs
	I0829 11:54:18.157298    4119 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0829 11:54:18.159995    4119 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19531-965/.minikube/files/etc/ssl/certs/14182.pem --> /etc/ssl/certs/14182.pem (1708 bytes)
	I0829 11:54:18.166758    4119 start.go:296] duration metric: took 44.298917ms for postStartSetup
	I0829 11:54:18.166793    4119 fix.go:56] duration metric: took 635.035125ms for fixHost
	I0829 11:54:18.166829    4119 main.go:141] libmachine: Using SSH client type: native
	I0829 11:54:18.166937    4119 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1010385a0] 0x10103ae00 <nil>  [] 0s} localhost 50289 <nil> <nil>}
	I0829 11:54:18.166943    4119 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0829 11:54:18.224206    4119 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724957658.260676839
	
	I0829 11:54:18.224216    4119 fix.go:216] guest clock: 1724957658.260676839
	I0829 11:54:18.224220    4119 fix.go:229] Guest: 2024-08-29 11:54:18.260676839 -0700 PDT Remote: 2024-08-29 11:54:18.166794 -0700 PDT m=+9.912397793 (delta=93.882839ms)
	I0829 11:54:18.224232    4119 fix.go:200] guest clock delta is within tolerance: 93.882839ms
	I0829 11:54:18.224235    4119 start.go:83] releasing machines lock for "running-upgrade-373000", held for 692.492334ms
	I0829 11:54:18.224307    4119 ssh_runner.go:195] Run: cat /version.json
	I0829 11:54:18.224309    4119 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0829 11:54:18.224316    4119 sshutil.go:53] new ssh client: &{IP:localhost Port:50289 SSHKeyPath:/Users/jenkins/minikube-integration/19531-965/.minikube/machines/running-upgrade-373000/id_rsa Username:docker}
	I0829 11:54:18.224330    4119 sshutil.go:53] new ssh client: &{IP:localhost Port:50289 SSHKeyPath:/Users/jenkins/minikube-integration/19531-965/.minikube/machines/running-upgrade-373000/id_rsa Username:docker}
	W0829 11:54:18.225078    4119 sshutil.go:64] dial failure (will retry): dial tcp [::1]:50289: connect: connection refused
	I0829 11:54:18.225101    4119 retry.go:31] will retry after 190.197992ms: dial tcp [::1]:50289: connect: connection refused
	W0829 11:54:18.446838    4119 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0829 11:54:18.446923    4119 ssh_runner.go:195] Run: systemctl --version
	I0829 11:54:18.448871    4119 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0829 11:54:18.450471    4119 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0829 11:54:18.450497    4119 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0829 11:54:18.453666    4119 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0829 11:54:18.457872    4119 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
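
The two find+sed passes above normalize any pre-existing bridge/podman CNI configs to minikube's pod network: IPv6 dst/subnet entries are dropped and the IPv4 subnet is forced onto the pod CIDR. On the matched conflist this leaves, roughly (abridged; only the rewritten keys shown):

    # /etc/cni/net.d/87-podman-bridge.conflist, after the rewrite
    "subnet": "10.244.0.0/16"
    "gateway": "10.244.0.1"
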
	I0829 11:54:18.457880    4119 start.go:495] detecting cgroup driver to use...
	I0829 11:54:18.457953    4119 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0829 11:54:18.463458    4119 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0829 11:54:18.467107    4119 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0829 11:54:18.470343    4119 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0829 11:54:18.470372    4119 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0829 11:54:18.473422    4119 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0829 11:54:18.476275    4119 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0829 11:54:18.479427    4119 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0829 11:54:18.482955    4119 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0829 11:54:18.486253    4119 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0829 11:54:18.489705    4119 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0829 11:54:18.492462    4119 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0829 11:54:18.495585    4119 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0829 11:54:18.498364    4119 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0829 11:54:18.501141    4119 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 11:54:18.595350    4119 ssh_runner.go:195] Run: sudo systemctl restart containerd
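
Taken together, the sed edits above leave /etc/containerd/config.toml with settings along these lines (abridged sketch; the exact layout depends on the config the guest image ships):

    [plugins."io.containerd.grpc.v1.cri"]
      enable_unprivileged_ports = true
      sandbox_image = "registry.k8s.io/pause:3.7"
      restrict_oom_score_adj = false
      [plugins."io.containerd.grpc.v1.cri".cni]
        conf_dir = "/etc/cni/net.d"
      # runtime switched to io.containerd.runc.v2, with SystemdCgroup = false (cgroupfs)
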
	I0829 11:54:18.606359    4119 start.go:495] detecting cgroup driver to use...
	I0829 11:54:18.606434    4119 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0829 11:54:18.612259    4119 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0829 11:54:18.617208    4119 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0829 11:54:18.623452    4119 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0829 11:54:18.628265    4119 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0829 11:54:18.633209    4119 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0829 11:54:18.638605    4119 ssh_runner.go:195] Run: which cri-dockerd
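
Note that /etc/crictl.yaml is written twice in this run: once pointing at containerd while the cgroup driver is probed, and again just above pointing at cri-dockerd once the docker runtime is confirmed. crictl resolves its CRI endpoint from this file, so the later crictl calls in this log talk to cri-dockerd:

    # final contents of /etc/crictl.yaml, as written by the printf above
    runtime-endpoint: unix:///var/run/cri-dockerd.sock
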
	I0829 11:54:18.639811    4119 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0829 11:54:18.642394    4119 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0829 11:54:18.647299    4119 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0829 11:54:18.737292    4119 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0829 11:54:18.831027    4119 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0829 11:54:18.831081    4119 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
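
The 130-byte /etc/docker/daemon.json written here is not echoed in the log; for the cgroupfs driver it would contain, at minimum, something like (a sketch, not the exact bytes):

    {
      "exec-opts": ["native.cgroupdriver=cgroupfs"]
    }
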
	I0829 11:54:18.836226    4119 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 11:54:18.925634    4119 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0829 11:54:40.394487    4119 ssh_runner.go:235] Completed: sudo systemctl restart docker: (21.469145167s)
	I0829 11:54:40.394542    4119 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0829 11:54:40.399380    4119 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0829 11:54:40.407793    4119 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0829 11:54:40.414359    4119 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0829 11:54:40.486929    4119 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0829 11:54:40.569329    4119 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 11:54:40.653618    4119 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0829 11:54:40.660370    4119 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0829 11:54:40.665099    4119 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 11:54:40.728960    4119 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0829 11:54:40.773486    4119 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0829 11:54:40.773561    4119 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0829 11:54:40.776747    4119 start.go:563] Will wait 60s for crictl version
	I0829 11:54:40.776811    4119 ssh_runner.go:195] Run: which crictl
	I0829 11:54:40.778031    4119 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0829 11:54:40.789767    4119 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0829 11:54:40.789829    4119 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0829 11:54:40.802070    4119 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0829 11:54:40.820783    4119 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0829 11:54:40.820904    4119 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0829 11:54:40.822384    4119 kubeadm.go:883] updating cluster {Name:running-upgrade-373000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50346 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-373000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0829 11:54:40.822430    4119 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0829 11:54:40.822468    4119 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0829 11:54:40.832799    4119 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0829 11:54:40.832808    4119 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
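
The mismatch above is the Kubernetes registry rename: the v1.24-era preload tarball tags its images under k8s.gcr.io, while this minikube checks for registry.k8s.io, so the preload is judged incomplete and the per-image cache loader below takes over. The same check by hand (it mirrors the `docker image inspect` calls later in the log):

    docker image inspect --format '{{.Id}}' registry.k8s.io/kube-apiserver:v1.24.1 \
      || echo "not tagged under registry.k8s.io; falling back to the local image cache"
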
	I0829 11:54:40.832853    4119 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0829 11:54:40.836473    4119 ssh_runner.go:195] Run: which lz4
	I0829 11:54:40.838047    4119 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0829 11:54:40.839506    4119 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0829 11:54:40.839528    4119 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19531-965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0829 11:54:41.819113    4119 docker.go:649] duration metric: took 981.127125ms to copy over tarball
	I0829 11:54:41.819188    4119 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0829 11:54:42.962761    4119 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.143575958s)
	I0829 11:54:42.962775    4119 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0829 11:54:42.978352    4119 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0829 11:54:42.981461    4119 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0829 11:54:42.986542    4119 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 11:54:43.079876    4119 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0829 11:54:44.264025    4119 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.184146541s)
	I0829 11:54:44.264123    4119 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0829 11:54:44.278295    4119 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0829 11:54:44.278305    4119 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0829 11:54:44.278310    4119 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0829 11:54:44.282185    4119 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 11:54:44.283881    4119 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0829 11:54:44.285637    4119 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 11:54:44.285658    4119 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0829 11:54:44.287406    4119 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0829 11:54:44.287477    4119 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0829 11:54:44.288747    4119 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0829 11:54:44.288814    4119 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0829 11:54:44.290435    4119 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0829 11:54:44.290539    4119 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0829 11:54:44.292028    4119 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0829 11:54:44.292049    4119 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0829 11:54:44.293287    4119 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0829 11:54:44.293382    4119 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0829 11:54:44.294470    4119 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0829 11:54:44.295710    4119 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0829 11:54:45.260814    4119 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0829 11:54:45.272481    4119 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0829 11:54:45.272509    4119 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0829 11:54:45.272561    4119 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0829 11:54:45.283768    4119 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19531-965/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	W0829 11:54:45.292975    4119 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0829 11:54:45.293105    4119 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0829 11:54:45.303564    4119 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0829 11:54:45.303585    4119 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0829 11:54:45.303641    4119 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0829 11:54:45.315464    4119 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19531-965/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0829 11:54:45.315593    4119 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0829 11:54:45.317664    4119 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0829 11:54:45.317677    4119 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19531-965/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0829 11:54:45.323397    4119 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0829 11:54:45.330876    4119 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0829 11:54:45.369371    4119 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0829 11:54:45.369401    4119 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0829 11:54:45.369455    4119 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0829 11:54:45.380973    4119 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0829 11:54:45.381002    4119 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0829 11:54:45.395260    4119 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0829 11:54:45.395285    4119 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0829 11:54:45.395269    4119 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19531-965/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0829 11:54:45.395339    4119 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0829 11:54:45.432888    4119 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19531-965/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0829 11:54:45.432918    4119 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19531-965/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0829 11:54:45.433030    4119 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0829 11:54:45.434589    4119 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0829 11:54:45.434600    4119 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19531-965/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0829 11:54:45.441809    4119 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0829 11:54:45.441817    4119 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0829 11:54:45.468964    4119 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19531-965/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0829 11:54:45.495877    4119 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0829 11:54:45.496914    4119 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0829 11:54:45.499985    4119 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0829 11:54:45.515694    4119 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0829 11:54:45.515718    4119 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0829 11:54:45.515780    4119 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0829 11:54:45.520344    4119 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0829 11:54:45.520387    4119 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0829 11:54:45.520445    4119 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0829 11:54:45.521950    4119 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0829 11:54:45.521965    4119 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0829 11:54:45.522010    4119 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	W0829 11:54:45.524786    4119 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0829 11:54:45.524979    4119 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 11:54:45.533903    4119 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19531-965/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0829 11:54:45.540205    4119 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19531-965/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0829 11:54:45.540334    4119 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0829 11:54:45.545718    4119 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19531-965/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0829 11:54:45.546898    4119 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0829 11:54:45.546917    4119 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 11:54:45.546964    4119 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 11:54:45.546982    4119 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0829 11:54:45.546993    4119 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19531-965/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0829 11:54:45.794022    4119 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0829 11:54:45.794039    4119 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0829 11:54:46.794497    4119 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load": (1.000426875s)
	I0829 11:54:46.794559    4119 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19531-965/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0829 11:54:46.794666    4119 ssh_runner.go:235] Completed: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.24767225s)
	I0829 11:54:46.794716    4119 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19531-965/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0829 11:54:46.794822    4119 cache_images.go:92] duration metric: took 2.516539333s to LoadCachedImages
	W0829 11:54:46.794958    4119 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19531-965/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19531-965/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
	I0829 11:54:46.794974    4119 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0829 11:54:46.795193    4119 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=running-upgrade-373000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-373000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
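
This rendered [Unit]/[Service] fragment is the kubelet systemd drop-in; per the scp lines below it lands in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf alongside /lib/systemd/system/kubelet.service. On the guest the merged result can be inspected with:

    systemctl cat kubelet   # shows the base unit plus the 10-kubeadm.conf drop-in with the flags above
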
	I0829 11:54:46.795505    4119 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0829 11:54:46.831372    4119 cni.go:84] Creating CNI manager for ""
	I0829 11:54:46.831393    4119 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0829 11:54:46.831402    4119 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0829 11:54:46.831416    4119 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-373000 NodeName:running-upgrade-373000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0829 11:54:46.831518    4119 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "running-upgrade-373000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0829 11:54:46.831606    4119 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0829 11:54:46.837401    4119 binaries.go:44] Found k8s binaries, skipping transfer
	I0829 11:54:46.837461    4119 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0829 11:54:46.842796    4119 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0829 11:54:46.849513    4119 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0829 11:54:46.856007    4119 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
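
The kubeadm config rendered above is what gets staged to /var/tmp/minikube/kubeadm.yaml.new here; minikube later feeds it to the cached kubeadm binary, roughly as follows (illustrative invocation; the exact flag set is minikube-internal):

    sudo /var/lib/minikube/binaries/v1.24.1/kubeadm init \
      --config /var/tmp/minikube/kubeadm.yaml \
      --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests
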
	I0829 11:54:46.862069    4119 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0829 11:54:46.863620    4119 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 11:54:46.945258    4119 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 11:54:46.950424    4119 certs.go:68] Setting up /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/running-upgrade-373000 for IP: 10.0.2.15
	I0829 11:54:46.950433    4119 certs.go:194] generating shared ca certs ...
	I0829 11:54:46.950441    4119 certs.go:226] acquiring lock for ca certs: {Name:mk29df1c1b696cda1cc19a90487167bb76984cb6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 11:54:46.950610    4119 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19531-965/.minikube/ca.key
	I0829 11:54:46.950657    4119 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19531-965/.minikube/proxy-client-ca.key
	I0829 11:54:46.950662    4119 certs.go:256] generating profile certs ...
	I0829 11:54:46.950736    4119 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/running-upgrade-373000/client.key
	I0829 11:54:46.950756    4119 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/running-upgrade-373000/apiserver.key.a23533c8
	I0829 11:54:46.950764    4119 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/running-upgrade-373000/apiserver.crt.a23533c8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0829 11:54:47.028994    4119 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/running-upgrade-373000/apiserver.crt.a23533c8 ...
	I0829 11:54:47.029007    4119 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/running-upgrade-373000/apiserver.crt.a23533c8: {Name:mkb99f9406036ea32cdd6901cb9445bd2cf71342 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 11:54:47.029322    4119 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/running-upgrade-373000/apiserver.key.a23533c8 ...
	I0829 11:54:47.029331    4119 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/running-upgrade-373000/apiserver.key.a23533c8: {Name:mk77df10af4c86a63bdc372261b22cdd8c7c9ef9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 11:54:47.029481    4119 certs.go:381] copying /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/running-upgrade-373000/apiserver.crt.a23533c8 -> /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/running-upgrade-373000/apiserver.crt
	I0829 11:54:47.029608    4119 certs.go:385] copying /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/running-upgrade-373000/apiserver.key.a23533c8 -> /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/running-upgrade-373000/apiserver.key
	I0829 11:54:47.029759    4119 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/running-upgrade-373000/proxy-client.key
	I0829 11:54:47.029891    4119 certs.go:484] found cert: /Users/jenkins/minikube-integration/19531-965/.minikube/certs/1418.pem (1338 bytes)
	W0829 11:54:47.029919    4119 certs.go:480] ignoring /Users/jenkins/minikube-integration/19531-965/.minikube/certs/1418_empty.pem, impossibly tiny 0 bytes
	I0829 11:54:47.029939    4119 certs.go:484] found cert: /Users/jenkins/minikube-integration/19531-965/.minikube/certs/ca-key.pem (1675 bytes)
	I0829 11:54:47.029979    4119 certs.go:484] found cert: /Users/jenkins/minikube-integration/19531-965/.minikube/certs/ca.pem (1082 bytes)
	I0829 11:54:47.030006    4119 certs.go:484] found cert: /Users/jenkins/minikube-integration/19531-965/.minikube/certs/cert.pem (1123 bytes)
	I0829 11:54:47.030032    4119 certs.go:484] found cert: /Users/jenkins/minikube-integration/19531-965/.minikube/certs/key.pem (1675 bytes)
	I0829 11:54:47.030096    4119 certs.go:484] found cert: /Users/jenkins/minikube-integration/19531-965/.minikube/files/etc/ssl/certs/14182.pem (1708 bytes)
	I0829 11:54:47.030429    4119 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19531-965/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0829 11:54:47.037923    4119 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19531-965/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0829 11:54:47.044972    4119 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19531-965/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0829 11:54:47.052170    4119 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19531-965/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0829 11:54:47.059242    4119 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/running-upgrade-373000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0829 11:54:47.065577    4119 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/running-upgrade-373000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0829 11:54:47.072260    4119 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/running-upgrade-373000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0829 11:54:47.079632    4119 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/running-upgrade-373000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0829 11:54:47.087150    4119 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19531-965/.minikube/files/etc/ssl/certs/14182.pem --> /usr/share/ca-certificates/14182.pem (1708 bytes)
	I0829 11:54:47.094308    4119 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19531-965/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0829 11:54:47.101133    4119 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19531-965/.minikube/certs/1418.pem --> /usr/share/ca-certificates/1418.pem (1338 bytes)
	I0829 11:54:47.108192    4119 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0829 11:54:47.113536    4119 ssh_runner.go:195] Run: openssl version
	I0829 11:54:47.115374    4119 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1418.pem && ln -fs /usr/share/ca-certificates/1418.pem /etc/ssl/certs/1418.pem"
	I0829 11:54:47.118929    4119 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1418.pem
	I0829 11:54:47.120451    4119 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 29 18:20 /usr/share/ca-certificates/1418.pem
	I0829 11:54:47.120476    4119 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1418.pem
	I0829 11:54:47.122585    4119 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1418.pem /etc/ssl/certs/51391683.0"
	I0829 11:54:47.125739    4119 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14182.pem && ln -fs /usr/share/ca-certificates/14182.pem /etc/ssl/certs/14182.pem"
	I0829 11:54:47.128754    4119 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14182.pem
	I0829 11:54:47.130198    4119 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 29 18:20 /usr/share/ca-certificates/14182.pem
	I0829 11:54:47.130229    4119 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14182.pem
	I0829 11:54:47.131863    4119 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14182.pem /etc/ssl/certs/3ec20f2e.0"
	I0829 11:54:47.134943    4119 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0829 11:54:47.138011    4119 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0829 11:54:47.139532    4119 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 29 18:05 /usr/share/ca-certificates/minikubeCA.pem
	I0829 11:54:47.139552    4119 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0829 11:54:47.141504    4119 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
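Each CA above is published to the guest trust store under its OpenSSL subject hash, i.e. `ln -fs <cert> /etc/ssl/certs/<hash>.0`, where the hash comes from `openssl x509 -hash -noout`. A hedged sketch of those two steps, shelling out to the same openssl invocation the log shows:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCA computes the OpenSSL subject hash of a CA file and links it as
// <hash>.0 under /etc/ssl/certs so the system trust store can resolve it.
// Paths are illustrative; minikube runs these commands over SSH in the guest.
func installCA(pem string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pem, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// Equivalent of `ln -fs`: drop any stale link, then point it at the cert.
	_ = os.Remove(link)
	return os.Symlink(pem, link)
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```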
	I0829 11:54:47.144277    4119 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0829 11:54:47.145942    4119 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0829 11:54:47.147948    4119 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0829 11:54:47.149849    4119 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0829 11:54:47.151702    4119 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0829 11:54:47.153689    4119 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0829 11:54:47.155579    4119 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
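`-checkend 86400` asks whether a certificate lapses within the next 86400 seconds (24 hours); a non-zero exit means it is about to expire and must be regenerated. A native-Go equivalent of that test (a sketch, not minikube's implementation):

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresSoon reports whether the first certificate in path lapses within
// window, the same question `openssl x509 -checkend 86400` answers.
func expiresSoon(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Until(cert.NotAfter) < window, nil
}

func main() {
	soon, err := expiresSoon("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}
```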
	I0829 11:54:47.157471    4119 kubeadm.go:392] StartCluster: {Name:running-upgrade-373000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50346 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-373000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0829 11:54:47.157537    4119 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0829 11:54:47.168667    4119 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0829 11:54:47.172361    4119 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0829 11:54:47.172367    4119 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0829 11:54:47.172391    4119 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0829 11:54:47.176326    4119 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0829 11:54:47.176625    4119 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-373000" does not appear in /Users/jenkins/minikube-integration/19531-965/kubeconfig
	I0829 11:54:47.176716    4119 kubeconfig.go:62] /Users/jenkins/minikube-integration/19531-965/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-373000" cluster setting kubeconfig missing "running-upgrade-373000" context setting]
	I0829 11:54:47.176921    4119 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19531-965/kubeconfig: {Name:mk8af293b3e18a99fbcb2b7e12f57a5251bf5686 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 11:54:47.177358    4119 kapi.go:59] client config for running-upgrade-373000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19531-965/.minikube/profiles/running-upgrade-373000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19531-965/.minikube/profiles/running-upgrade-373000/client.key", CAFile:"/Users/jenkins/minikube-integration/19531-965/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1025f3f80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0829 11:54:47.177701    4119 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0829 11:54:47.180703    4119 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "running-upgrade-373000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
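The drift check above relies on `diff -u` exit codes: 0 means the rendered kubeadm.yaml matches what is on disk, 1 means it drifted (here, the criSocket and cgroupDriver changes) and the cluster gets reconfigured. A sketch of that decision, assuming plain GNU diff semantics:

```go
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// configDrifted runs `diff -u old new`: diff exits 0 when the files match,
// 1 when they differ, and >1 on real errors, which is how the check above
// decides whether to reconfigure the cluster from the new config.
func configDrifted(oldPath, newPath string) (bool, string, error) {
	out, err := exec.Command("diff", "-u", oldPath, newPath).CombinedOutput()
	if err == nil {
		return false, "", nil // identical
	}
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 1 {
		return true, string(out), nil // differ: out holds the unified diff
	}
	return false, "", err // diff itself failed
}

func main() {
	drifted, diff, err := configDrifted("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	fmt.Println("drifted:", drifted, "err:", err)
	fmt.Print(diff)
}
```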
	I0829 11:54:47.180710    4119 kubeadm.go:1160] stopping kube-system containers ...
	I0829 11:54:47.180758    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0829 11:54:47.193999    4119 docker.go:483] Stopping containers: [f490ded4c775 61d794536a77 6a3aea228f8f a21d967dfd89 10bf69d629ee 16eb67c2d136 13bce22076e1 df4daf7d4b00 9dd26a5ff741 9f00e1b3f7d6 8dcc015af669 594fc81523c1]
	I0829 11:54:47.194072    4119 ssh_runner.go:195] Run: docker stop f490ded4c775 61d794536a77 6a3aea228f8f a21d967dfd89 10bf69d629ee 16eb67c2d136 13bce22076e1 df4daf7d4b00 9dd26a5ff741 9f00e1b3f7d6 8dcc015af669 594fc81523c1
	I0829 11:54:47.206018    4119 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0829 11:54:47.291286    4119 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0829 11:54:47.295182    4119 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5639 Aug 29 18:53 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5649 Aug 29 18:53 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Aug 29 18:54 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5601 Aug 29 18:53 /etc/kubernetes/scheduler.conf
	
	I0829 11:54:47.295215    4119 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50346 /etc/kubernetes/admin.conf
	I0829 11:54:47.298295    4119 kubeadm.go:163] "https://control-plane.minikube.internal:50346" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50346 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0829 11:54:47.298320    4119 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0829 11:54:47.301066    4119 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50346 /etc/kubernetes/kubelet.conf
	I0829 11:54:47.303927    4119 kubeadm.go:163] "https://control-plane.minikube.internal:50346" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50346 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0829 11:54:47.303949    4119 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0829 11:54:47.307409    4119 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50346 /etc/kubernetes/controller-manager.conf
	I0829 11:54:47.310549    4119 kubeadm.go:163] "https://control-plane.minikube.internal:50346" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50346 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0829 11:54:47.310577    4119 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0829 11:54:47.313407    4119 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50346 /etc/kubernetes/scheduler.conf
	I0829 11:54:47.316210    4119 kubeadm.go:163] "https://control-plane.minikube.internal:50346" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50346 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0829 11:54:47.316233    4119 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
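Each component kubeconfig above is grepped for the expected control-plane URL and deleted when the URL is absent, so the kubeadm phases that follow regenerate it with the right endpoint. A sketch of the same prune, using the endpoint and file list from this log:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// pruneStaleKubeconfigs deletes any kubeconfig that does not mention the
// expected control-plane endpoint, mirroring the grep/rm sequence above.
func pruneStaleKubeconfigs(endpoint string, files []string) error {
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil {
			return err
		}
		if !strings.Contains(string(data), endpoint) {
			fmt.Printf("%q not in %s - removing\n", endpoint, f)
			if err := os.Remove(f); err != nil {
				return err
			}
		}
	}
	return nil
}

func main() {
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	_ = pruneStaleKubeconfigs("https://control-plane.minikube.internal:50346", files)
}
```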
	I0829 11:54:47.319559    4119 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0829 11:54:47.323132    4119 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 11:54:47.343946    4119 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 11:54:47.838417    4119 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0829 11:54:48.039963    4119 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 11:54:48.071554    4119 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
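The restart path rebuilds the control plane piecewise with `kubeadm init phase` subcommands in the fixed order just logged. A sketch that replays the sequence (binary and config paths taken from this log; treat the wrapper itself as illustrative):

```go
package main

import (
	"fmt"
	"os/exec"
)

// runInitPhases replays the phase sequence logged above. Each phase is a
// real `kubeadm init phase` subcommand run against the rendered config.
func runInitPhases() error {
	const kubeadmBin = "/var/lib/minikube/binaries/v1.24.1/kubeadm"
	const cfg = "/var/tmp/minikube/kubeadm.yaml"
	phases := [][]string{
		{"certs", "all"},
		{"kubeconfig", "all"},
		{"kubelet-start"},
		{"control-plane", "all"},
		{"etcd", "local"},
	}
	for _, p := range phases {
		args := append(append([]string{"init", "phase"}, p...), "--config", cfg)
		if out, err := exec.Command(kubeadmBin, args...).CombinedOutput(); err != nil {
			return fmt.Errorf("phase %v: %v\n%s", p, err, out)
		}
	}
	return nil
}

func main() {
	if err := runInitPhases(); err != nil {
		fmt.Println(err)
	}
}
```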
	I0829 11:54:48.101338    4119 api_server.go:52] waiting for apiserver process to appear ...
	I0829 11:54:48.101410    4119 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 11:54:48.603336    4119 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 11:54:49.103454    4119 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 11:54:49.602917    4119 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 11:54:49.607829    4119 api_server.go:72] duration metric: took 1.50651475s to wait for apiserver process to appear ...
	I0829 11:54:49.607840    4119 api_server.go:88] waiting for apiserver healthz status ...
	I0829 11:54:49.607849    4119 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0829 11:54:54.609881    4119 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0829 11:54:54.609912    4119 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0829 11:54:59.610092    4119 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0829 11:54:59.610118    4119 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0829 11:55:04.610307    4119 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0829 11:55:04.610341    4119 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0829 11:55:09.610650    4119 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0829 11:55:09.610682    4119 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0829 11:55:14.611107    4119 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0829 11:55:14.611132    4119 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0829 11:55:19.611723    4119 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0829 11:55:19.611779    4119 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0829 11:55:24.612635    4119 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0829 11:55:24.612657    4119 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0829 11:55:29.613653    4119 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0829 11:55:29.613701    4119 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0829 11:55:34.614225    4119 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0829 11:55:34.614254    4119 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0829 11:55:39.615714    4119 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0829 11:55:39.615745    4119 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0829 11:55:44.617860    4119 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0829 11:55:44.617883    4119 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0829 11:55:49.619987    4119 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
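The readiness loop above GETs /healthz roughly every five seconds and treats a client timeout as "not up yet"; here every attempt times out, which is what ultimately fails the test. A minimal sketch of such a poller; it skips TLS verification for brevity, whereas minikube dials with the cluster CA from the kubeconfig:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitHealthz polls the apiserver /healthz endpoint until it answers 200 OK
// or the overall deadline passes.
func waitHealthz(url string, deadline time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second, // matches the ~5s gaps between checks above
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
		},
	}
	stop := time.Now().Add(deadline)
	for time.Now().Before(stop) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	fmt.Println(waitHealthz("https://10.0.2.15:8443/healthz", 4*time.Minute))
}
```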
	I0829 11:55:49.620175    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0829 11:55:49.631535    4119 logs.go:276] 2 containers: [a69eed03c2d2 61d794536a77]
	I0829 11:55:49.631605    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0829 11:55:49.642184    4119 logs.go:276] 2 containers: [cdc2ea695101 16eb67c2d136]
	I0829 11:55:49.642244    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0829 11:55:49.652448    4119 logs.go:276] 1 containers: [b15e5f87cad3]
	I0829 11:55:49.652534    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0829 11:55:49.663075    4119 logs.go:276] 2 containers: [6f05875fe402 df4daf7d4b00]
	I0829 11:55:49.663150    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0829 11:55:49.674121    4119 logs.go:276] 1 containers: [3bf90f429313]
	I0829 11:55:49.674194    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0829 11:55:49.684758    4119 logs.go:276] 2 containers: [de2759481e1c 6a3aea228f8f]
	I0829 11:55:49.684827    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0829 11:55:49.696761    4119 logs.go:276] 0 containers: []
	W0829 11:55:49.696777    4119 logs.go:278] No container was found matching "kindnet"
	I0829 11:55:49.696837    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0829 11:55:49.707040    4119 logs.go:276] 1 containers: [28b50a11dc0a]
	I0829 11:55:49.707055    4119 logs.go:123] Gathering logs for kube-scheduler [df4daf7d4b00] ...
	I0829 11:55:49.707061    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df4daf7d4b00"
	I0829 11:55:49.723438    4119 logs.go:123] Gathering logs for container status ...
	I0829 11:55:49.723448    4119 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 11:55:49.735669    4119 logs.go:123] Gathering logs for Docker ...
	I0829 11:55:49.735680    4119 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0829 11:55:49.762200    4119 logs.go:123] Gathering logs for kubelet ...
	I0829 11:55:49.762210    4119 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0829 11:55:49.798158    4119 logs.go:138] Found kubelet problem: Aug 29 18:55:05 running-upgrade-373000 kubelet[3970]: W0829 18:55:05.970143    3970 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-373000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-373000' and this object
	W0829 11:55:49.798254    4119 logs.go:138] Found kubelet problem: Aug 29 18:55:05 running-upgrade-373000 kubelet[3970]: E0829 18:55:05.970168    3970 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-373000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-373000' and this object
	I0829 11:55:49.798915    4119 logs.go:123] Gathering logs for kube-scheduler [6f05875fe402] ...
	I0829 11:55:49.798923    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f05875fe402"
	I0829 11:55:49.813574    4119 logs.go:123] Gathering logs for kube-controller-manager [de2759481e1c] ...
	I0829 11:55:49.813587    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de2759481e1c"
	I0829 11:55:49.832024    4119 logs.go:123] Gathering logs for storage-provisioner [28b50a11dc0a] ...
	I0829 11:55:49.832037    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28b50a11dc0a"
	I0829 11:55:49.844077    4119 logs.go:123] Gathering logs for kube-proxy [3bf90f429313] ...
	I0829 11:55:49.844093    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bf90f429313"
	I0829 11:55:49.856547    4119 logs.go:123] Gathering logs for kube-controller-manager [6a3aea228f8f] ...
	I0829 11:55:49.856558    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a3aea228f8f"
	I0829 11:55:49.869739    4119 logs.go:123] Gathering logs for etcd [cdc2ea695101] ...
	I0829 11:55:49.869750    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdc2ea695101"
	I0829 11:55:49.884184    4119 logs.go:123] Gathering logs for etcd [16eb67c2d136] ...
	I0829 11:55:49.884197    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16eb67c2d136"
	I0829 11:55:49.902648    4119 logs.go:123] Gathering logs for coredns [b15e5f87cad3] ...
	I0829 11:55:49.902660    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b15e5f87cad3"
	I0829 11:55:49.914843    4119 logs.go:123] Gathering logs for dmesg ...
	I0829 11:55:49.914857    4119 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 11:55:49.919570    4119 logs.go:123] Gathering logs for describe nodes ...
	I0829 11:55:49.919577    4119 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0829 11:55:49.990210    4119 logs.go:123] Gathering logs for kube-apiserver [a69eed03c2d2] ...
	I0829 11:55:49.990224    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a69eed03c2d2"
	I0829 11:55:50.005037    4119 logs.go:123] Gathering logs for kube-apiserver [61d794536a77] ...
	I0829 11:55:50.005048    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61d794536a77"
	I0829 11:55:50.049351    4119 out.go:358] Setting ErrFile to fd 2...
	I0829 11:55:50.049361    4119 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0829 11:55:50.049385    4119 out.go:270] X Problems detected in kubelet:
	W0829 11:55:50.049390    4119 out.go:270]   Aug 29 18:55:05 running-upgrade-373000 kubelet[3970]: W0829 18:55:05.970143    3970 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-373000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-373000' and this object
	W0829 11:55:50.049401    4119 out.go:270]   Aug 29 18:55:05 running-upgrade-373000 kubelet[3970]: E0829 18:55:05.970168    3970 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-373000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-373000' and this object
	I0829 11:55:50.049407    4119 out.go:358] Setting ErrFile to fd 2...
	I0829 11:55:50.049415    4119 out.go:392] TERM=,COLORTERM=, which probably does not support color
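Each diagnostic pass like the one above lists containers by a k8s_<component> name filter, then tails the last 400 lines of each hit, which is exactly the pair of docker commands the gathering loop repeats per component. A sketch of one pass (component list abbreviated):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// tailComponent finds containers named k8s_<component> and tails the last
// 400 lines of each, mirroring the two docker invocations logged above.
func tailComponent(component string) error {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter=name=k8s_"+component, "--format={{.ID}}").Output()
	if err != nil {
		return err
	}
	ids := strings.Fields(string(out))
	fmt.Printf("%d containers: %v\n", len(ids), ids)
	for _, id := range ids {
		logs, err := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
		if err != nil {
			return err
		}
		fmt.Printf("=== %s [%s] ===\n%s", component, id, logs)
	}
	return nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		_ = tailComponent(c)
	}
}
```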
	I0829 11:56:00.052870    4119 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0829 11:56:05.054992    4119 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0829 11:56:05.055164    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0829 11:56:05.069429    4119 logs.go:276] 2 containers: [a69eed03c2d2 61d794536a77]
	I0829 11:56:05.069505    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0829 11:56:05.080470    4119 logs.go:276] 2 containers: [cdc2ea695101 16eb67c2d136]
	I0829 11:56:05.080537    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0829 11:56:05.090678    4119 logs.go:276] 1 containers: [b15e5f87cad3]
	I0829 11:56:05.090753    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0829 11:56:05.101188    4119 logs.go:276] 2 containers: [6f05875fe402 df4daf7d4b00]
	I0829 11:56:05.101257    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0829 11:56:05.111526    4119 logs.go:276] 1 containers: [3bf90f429313]
	I0829 11:56:05.111587    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0829 11:56:05.122612    4119 logs.go:276] 2 containers: [de2759481e1c 6a3aea228f8f]
	I0829 11:56:05.122676    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0829 11:56:05.133679    4119 logs.go:276] 0 containers: []
	W0829 11:56:05.133691    4119 logs.go:278] No container was found matching "kindnet"
	I0829 11:56:05.133750    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0829 11:56:05.148284    4119 logs.go:276] 1 containers: [28b50a11dc0a]
	I0829 11:56:05.148301    4119 logs.go:123] Gathering logs for kube-apiserver [a69eed03c2d2] ...
	I0829 11:56:05.148308    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a69eed03c2d2"
	I0829 11:56:05.166438    4119 logs.go:123] Gathering logs for kube-apiserver [61d794536a77] ...
	I0829 11:56:05.166450    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61d794536a77"
	I0829 11:56:05.204364    4119 logs.go:123] Gathering logs for kube-scheduler [6f05875fe402] ...
	I0829 11:56:05.204376    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f05875fe402"
	I0829 11:56:05.221672    4119 logs.go:123] Gathering logs for kube-scheduler [df4daf7d4b00] ...
	I0829 11:56:05.221686    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df4daf7d4b00"
	I0829 11:56:05.237242    4119 logs.go:123] Gathering logs for describe nodes ...
	I0829 11:56:05.237253    4119 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0829 11:56:05.273309    4119 logs.go:123] Gathering logs for etcd [cdc2ea695101] ...
	I0829 11:56:05.273323    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdc2ea695101"
	I0829 11:56:05.287036    4119 logs.go:123] Gathering logs for etcd [16eb67c2d136] ...
	I0829 11:56:05.287050    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16eb67c2d136"
	I0829 11:56:05.304083    4119 logs.go:123] Gathering logs for kube-proxy [3bf90f429313] ...
	I0829 11:56:05.304097    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bf90f429313"
	I0829 11:56:05.316587    4119 logs.go:123] Gathering logs for kube-controller-manager [de2759481e1c] ...
	I0829 11:56:05.316598    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de2759481e1c"
	I0829 11:56:05.333976    4119 logs.go:123] Gathering logs for kube-controller-manager [6a3aea228f8f] ...
	I0829 11:56:05.333987    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a3aea228f8f"
	I0829 11:56:05.347294    4119 logs.go:123] Gathering logs for kubelet ...
	I0829 11:56:05.347304    4119 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0829 11:56:05.384622    4119 logs.go:138] Found kubelet problem: Aug 29 18:55:05 running-upgrade-373000 kubelet[3970]: W0829 18:55:05.970143    3970 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-373000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-373000' and this object
	W0829 11:56:05.384718    4119 logs.go:138] Found kubelet problem: Aug 29 18:55:05 running-upgrade-373000 kubelet[3970]: E0829 18:55:05.970168    3970 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-373000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-373000' and this object
	I0829 11:56:05.385441    4119 logs.go:123] Gathering logs for coredns [b15e5f87cad3] ...
	I0829 11:56:05.385449    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b15e5f87cad3"
	I0829 11:56:05.396922    4119 logs.go:123] Gathering logs for Docker ...
	I0829 11:56:05.396934    4119 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0829 11:56:05.423780    4119 logs.go:123] Gathering logs for container status ...
	I0829 11:56:05.423790    4119 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 11:56:05.437083    4119 logs.go:123] Gathering logs for dmesg ...
	I0829 11:56:05.437097    4119 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 11:56:05.441275    4119 logs.go:123] Gathering logs for storage-provisioner [28b50a11dc0a] ...
	I0829 11:56:05.441285    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28b50a11dc0a"
	I0829 11:56:05.453237    4119 out.go:358] Setting ErrFile to fd 2...
	I0829 11:56:05.453249    4119 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0829 11:56:05.453276    4119 out.go:270] X Problems detected in kubelet:
	W0829 11:56:05.453281    4119 out.go:270]   Aug 29 18:55:05 running-upgrade-373000 kubelet[3970]: W0829 18:55:05.970143    3970 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-373000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-373000' and this object
	W0829 11:56:05.453285    4119 out.go:270]   Aug 29 18:55:05 running-upgrade-373000 kubelet[3970]: E0829 18:55:05.970168    3970 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-373000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-373000' and this object
	I0829 11:56:05.453289    4119 out.go:358] Setting ErrFile to fd 2...
	I0829 11:56:05.453294    4119 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 11:56:15.455446    4119 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0829 11:56:20.458085    4119 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0829 11:56:20.458413    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0829 11:56:20.488281    4119 logs.go:276] 2 containers: [a69eed03c2d2 61d794536a77]
	I0829 11:56:20.488398    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0829 11:56:20.506469    4119 logs.go:276] 2 containers: [cdc2ea695101 16eb67c2d136]
	I0829 11:56:20.506564    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0829 11:56:20.520455    4119 logs.go:276] 1 containers: [b15e5f87cad3]
	I0829 11:56:20.520527    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0829 11:56:20.541095    4119 logs.go:276] 2 containers: [6f05875fe402 df4daf7d4b00]
	I0829 11:56:20.541169    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0829 11:56:20.551477    4119 logs.go:276] 1 containers: [3bf90f429313]
	I0829 11:56:20.551544    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0829 11:56:20.561783    4119 logs.go:276] 2 containers: [de2759481e1c 6a3aea228f8f]
	I0829 11:56:20.561846    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0829 11:56:20.572550    4119 logs.go:276] 0 containers: []
	W0829 11:56:20.572561    4119 logs.go:278] No container was found matching "kindnet"
	I0829 11:56:20.572621    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0829 11:56:20.582719    4119 logs.go:276] 1 containers: [28b50a11dc0a]
	I0829 11:56:20.582736    4119 logs.go:123] Gathering logs for kube-controller-manager [6a3aea228f8f] ...
	I0829 11:56:20.582742    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a3aea228f8f"
	I0829 11:56:20.595148    4119 logs.go:123] Gathering logs for dmesg ...
	I0829 11:56:20.595160    4119 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 11:56:20.599652    4119 logs.go:123] Gathering logs for etcd [cdc2ea695101] ...
	I0829 11:56:20.599659    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdc2ea695101"
	I0829 11:56:20.617791    4119 logs.go:123] Gathering logs for coredns [b15e5f87cad3] ...
	I0829 11:56:20.617803    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b15e5f87cad3"
	I0829 11:56:20.629322    4119 logs.go:123] Gathering logs for kube-scheduler [6f05875fe402] ...
	I0829 11:56:20.629335    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f05875fe402"
	I0829 11:56:20.641231    4119 logs.go:123] Gathering logs for kube-proxy [3bf90f429313] ...
	I0829 11:56:20.641241    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bf90f429313"
	I0829 11:56:20.653025    4119 logs.go:123] Gathering logs for container status ...
	I0829 11:56:20.653035    4119 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 11:56:20.665729    4119 logs.go:123] Gathering logs for kubelet ...
	I0829 11:56:20.665740    4119 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0829 11:56:20.701535    4119 logs.go:138] Found kubelet problem: Aug 29 18:55:05 running-upgrade-373000 kubelet[3970]: W0829 18:55:05.970143    3970 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-373000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-373000' and this object
	W0829 11:56:20.701633    4119 logs.go:138] Found kubelet problem: Aug 29 18:55:05 running-upgrade-373000 kubelet[3970]: E0829 18:55:05.970168    3970 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-373000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-373000' and this object
	I0829 11:56:20.702312    4119 logs.go:123] Gathering logs for kube-apiserver [a69eed03c2d2] ...
	I0829 11:56:20.702315    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a69eed03c2d2"
	I0829 11:56:20.716852    4119 logs.go:123] Gathering logs for etcd [16eb67c2d136] ...
	I0829 11:56:20.716862    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16eb67c2d136"
	I0829 11:56:20.734295    4119 logs.go:123] Gathering logs for kube-scheduler [df4daf7d4b00] ...
	I0829 11:56:20.734309    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df4daf7d4b00"
	I0829 11:56:20.750038    4119 logs.go:123] Gathering logs for Docker ...
	I0829 11:56:20.750050    4119 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0829 11:56:20.776361    4119 logs.go:123] Gathering logs for describe nodes ...
	I0829 11:56:20.776371    4119 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0829 11:56:20.810833    4119 logs.go:123] Gathering logs for kube-apiserver [61d794536a77] ...
	I0829 11:56:20.810847    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61d794536a77"
	I0829 11:56:20.848234    4119 logs.go:123] Gathering logs for kube-controller-manager [de2759481e1c] ...
	I0829 11:56:20.848245    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de2759481e1c"
	I0829 11:56:20.869771    4119 logs.go:123] Gathering logs for storage-provisioner [28b50a11dc0a] ...
	I0829 11:56:20.869782    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28b50a11dc0a"
	I0829 11:56:20.881204    4119 out.go:358] Setting ErrFile to fd 2...
	I0829 11:56:20.881217    4119 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0829 11:56:20.881243    4119 out.go:270] X Problems detected in kubelet:
	W0829 11:56:20.881248    4119 out.go:270]   Aug 29 18:55:05 running-upgrade-373000 kubelet[3970]: W0829 18:55:05.970143    3970 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-373000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-373000' and this object
	W0829 11:56:20.881252    4119 out.go:270]   Aug 29 18:55:05 running-upgrade-373000 kubelet[3970]: E0829 18:55:05.970168    3970 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-373000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-373000' and this object
	I0829 11:56:20.881256    4119 out.go:358] Setting ErrFile to fd 2...
	I0829 11:56:20.881258    4119 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 11:56:30.883585    4119 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0829 11:56:35.886047    4119 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0829 11:56:35.886501    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0829 11:56:35.923611    4119 logs.go:276] 2 containers: [a69eed03c2d2 61d794536a77]
	I0829 11:56:35.923755    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0829 11:56:35.946546    4119 logs.go:276] 2 containers: [cdc2ea695101 16eb67c2d136]
	I0829 11:56:35.946638    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0829 11:56:35.962505    4119 logs.go:276] 1 containers: [b15e5f87cad3]
	I0829 11:56:35.962579    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0829 11:56:35.974798    4119 logs.go:276] 2 containers: [6f05875fe402 df4daf7d4b00]
	I0829 11:56:35.974871    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0829 11:56:35.985688    4119 logs.go:276] 1 containers: [3bf90f429313]
	I0829 11:56:35.985759    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0829 11:56:35.996869    4119 logs.go:276] 2 containers: [de2759481e1c 6a3aea228f8f]
	I0829 11:56:35.996938    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0829 11:56:36.007246    4119 logs.go:276] 0 containers: []
	W0829 11:56:36.007257    4119 logs.go:278] No container was found matching "kindnet"
	I0829 11:56:36.007315    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0829 11:56:36.019494    4119 logs.go:276] 1 containers: [28b50a11dc0a]
	I0829 11:56:36.019513    4119 logs.go:123] Gathering logs for kube-apiserver [61d794536a77] ...
	I0829 11:56:36.019520    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61d794536a77"
	I0829 11:56:36.058446    4119 logs.go:123] Gathering logs for kube-scheduler [6f05875fe402] ...
	I0829 11:56:36.058456    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f05875fe402"
	I0829 11:56:36.070603    4119 logs.go:123] Gathering logs for kube-scheduler [df4daf7d4b00] ...
	I0829 11:56:36.070615    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df4daf7d4b00"
	I0829 11:56:36.085967    4119 logs.go:123] Gathering logs for kube-controller-manager [6a3aea228f8f] ...
	I0829 11:56:36.085978    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a3aea228f8f"
	I0829 11:56:36.098536    4119 logs.go:123] Gathering logs for describe nodes ...
	I0829 11:56:36.098547    4119 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0829 11:56:36.136735    4119 logs.go:123] Gathering logs for kube-apiserver [a69eed03c2d2] ...
	I0829 11:56:36.136746    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a69eed03c2d2"
	I0829 11:56:36.152021    4119 logs.go:123] Gathering logs for etcd [16eb67c2d136] ...
	I0829 11:56:36.152033    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16eb67c2d136"
	I0829 11:56:36.171060    4119 logs.go:123] Gathering logs for kube-proxy [3bf90f429313] ...
	I0829 11:56:36.171071    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bf90f429313"
	I0829 11:56:36.182726    4119 logs.go:123] Gathering logs for kubelet ...
	I0829 11:56:36.182737    4119 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0829 11:56:36.219241    4119 logs.go:138] Found kubelet problem: Aug 29 18:55:05 running-upgrade-373000 kubelet[3970]: W0829 18:55:05.970143    3970 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-373000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-373000' and this object
	W0829 11:56:36.219337    4119 logs.go:138] Found kubelet problem: Aug 29 18:55:05 running-upgrade-373000 kubelet[3970]: E0829 18:55:05.970168    3970 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-373000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-373000' and this object
	I0829 11:56:36.219995    4119 logs.go:123] Gathering logs for etcd [cdc2ea695101] ...
	I0829 11:56:36.220001    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdc2ea695101"
	I0829 11:56:36.234577    4119 logs.go:123] Gathering logs for Docker ...
	I0829 11:56:36.234591    4119 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0829 11:56:36.259918    4119 logs.go:123] Gathering logs for container status ...
	I0829 11:56:36.259929    4119 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 11:56:36.271878    4119 logs.go:123] Gathering logs for dmesg ...
	I0829 11:56:36.271892    4119 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 11:56:36.276464    4119 logs.go:123] Gathering logs for coredns [b15e5f87cad3] ...
	I0829 11:56:36.276473    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b15e5f87cad3"
	I0829 11:56:36.287776    4119 logs.go:123] Gathering logs for kube-controller-manager [de2759481e1c] ...
	I0829 11:56:36.287788    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de2759481e1c"
	I0829 11:56:36.304969    4119 logs.go:123] Gathering logs for storage-provisioner [28b50a11dc0a] ...
	I0829 11:56:36.304980    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28b50a11dc0a"
	I0829 11:56:36.316488    4119 out.go:358] Setting ErrFile to fd 2...
	I0829 11:56:36.316501    4119 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0829 11:56:36.316526    4119 out.go:270] X Problems detected in kubelet:
	W0829 11:56:36.316531    4119 out.go:270]   Aug 29 18:55:05 running-upgrade-373000 kubelet[3970]: W0829 18:55:05.970143    3970 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-373000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-373000' and this object
	W0829 11:56:36.316535    4119 out.go:270]   Aug 29 18:55:05 running-upgrade-373000 kubelet[3970]: E0829 18:55:05.970168    3970 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-373000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-373000' and this object
	I0829 11:56:36.316539    4119 out.go:358] Setting ErrFile to fd 2...
	I0829 11:56:36.316541    4119 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 11:56:46.318772    4119 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0829 11:56:51.321014    4119 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0829 11:56:51.321153    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0829 11:56:51.336356    4119 logs.go:276] 2 containers: [a69eed03c2d2 61d794536a77]
	I0829 11:56:51.336430    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0829 11:56:51.352293    4119 logs.go:276] 2 containers: [cdc2ea695101 16eb67c2d136]
	I0829 11:56:51.352363    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0829 11:56:51.363282    4119 logs.go:276] 1 containers: [b15e5f87cad3]
	I0829 11:56:51.363342    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0829 11:56:51.373974    4119 logs.go:276] 2 containers: [6f05875fe402 df4daf7d4b00]
	I0829 11:56:51.374042    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0829 11:56:51.384079    4119 logs.go:276] 1 containers: [3bf90f429313]
	I0829 11:56:51.384150    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0829 11:56:51.395088    4119 logs.go:276] 2 containers: [de2759481e1c 6a3aea228f8f]
	I0829 11:56:51.395160    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0829 11:56:51.405741    4119 logs.go:276] 0 containers: []
	W0829 11:56:51.405752    4119 logs.go:278] No container was found matching "kindnet"
	I0829 11:56:51.405805    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0829 11:56:51.416373    4119 logs.go:276] 1 containers: [28b50a11dc0a]
	I0829 11:56:51.416392    4119 logs.go:123] Gathering logs for kube-apiserver [a69eed03c2d2] ...
	I0829 11:56:51.416398    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a69eed03c2d2"
	I0829 11:56:51.432397    4119 logs.go:123] Gathering logs for kube-apiserver [61d794536a77] ...
	I0829 11:56:51.432412    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61d794536a77"
	I0829 11:56:51.474296    4119 logs.go:123] Gathering logs for kube-scheduler [6f05875fe402] ...
	I0829 11:56:51.474306    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f05875fe402"
	I0829 11:56:51.486973    4119 logs.go:123] Gathering logs for container status ...
	I0829 11:56:51.486985    4119 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 11:56:51.498956    4119 logs.go:123] Gathering logs for describe nodes ...
	I0829 11:56:51.498967    4119 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0829 11:56:51.534630    4119 logs.go:123] Gathering logs for Docker ...
	I0829 11:56:51.534640    4119 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0829 11:56:51.558901    4119 logs.go:123] Gathering logs for etcd [16eb67c2d136] ...
	I0829 11:56:51.558912    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16eb67c2d136"
	I0829 11:56:51.576348    4119 logs.go:123] Gathering logs for kube-scheduler [df4daf7d4b00] ...
	I0829 11:56:51.576359    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df4daf7d4b00"
	I0829 11:56:51.591877    4119 logs.go:123] Gathering logs for storage-provisioner [28b50a11dc0a] ...
	I0829 11:56:51.591890    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28b50a11dc0a"
	I0829 11:56:51.603157    4119 logs.go:123] Gathering logs for kube-proxy [3bf90f429313] ...
	I0829 11:56:51.603171    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bf90f429313"
	I0829 11:56:51.615234    4119 logs.go:123] Gathering logs for kube-controller-manager [de2759481e1c] ...
	I0829 11:56:51.615244    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de2759481e1c"
	I0829 11:56:51.632365    4119 logs.go:123] Gathering logs for kube-controller-manager [6a3aea228f8f] ...
	I0829 11:56:51.632376    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a3aea228f8f"
	I0829 11:56:51.644812    4119 logs.go:123] Gathering logs for kubelet ...
	I0829 11:56:51.644824    4119 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0829 11:56:51.680102    4119 logs.go:138] Found kubelet problem: Aug 29 18:55:05 running-upgrade-373000 kubelet[3970]: W0829 18:55:05.970143    3970 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-373000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-373000' and this object
	W0829 11:56:51.680197    4119 logs.go:138] Found kubelet problem: Aug 29 18:55:05 running-upgrade-373000 kubelet[3970]: E0829 18:55:05.970168    3970 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-373000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-373000' and this object
	I0829 11:56:51.680858    4119 logs.go:123] Gathering logs for dmesg ...
	I0829 11:56:51.680863    4119 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 11:56:51.685521    4119 logs.go:123] Gathering logs for etcd [cdc2ea695101] ...
	I0829 11:56:51.685530    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdc2ea695101"
	I0829 11:56:51.699359    4119 logs.go:123] Gathering logs for coredns [b15e5f87cad3] ...
	I0829 11:56:51.699372    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b15e5f87cad3"
	I0829 11:56:51.710658    4119 out.go:358] Setting ErrFile to fd 2...
	I0829 11:56:51.710671    4119 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0829 11:56:51.710695    4119 out.go:270] X Problems detected in kubelet:
	W0829 11:56:51.710698    4119 out.go:270]   Aug 29 18:55:05 running-upgrade-373000 kubelet[3970]: W0829 18:55:05.970143    3970 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-373000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-373000' and this object
	W0829 11:56:51.710702    4119 out.go:270]   Aug 29 18:55:05 running-upgrade-373000 kubelet[3970]: E0829 18:55:05.970168    3970 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-373000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-373000' and this object
	I0829 11:56:51.710705    4119 out.go:358] Setting ErrFile to fd 2...
	I0829 11:56:51.710709    4119 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 11:57:01.714709    4119 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0829 11:57:06.715236    4119 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
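The healthz probe above fails after five seconds and is retried roughly every ten, judging by the timestamps (check at 11:57:01, "stopped" at 11:57:06, next check at 11:57:17). A minimal Go sketch of that polling pattern follows; the endpoint, timeout, and interval are read off the log timestamps, not taken from minikube's actual api_server.go:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// 5s timeout matches the "Client.Timeout exceeded" errors in the log.
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// the test VM's apiserver certificate is self-signed, so skip verification
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	for {
		resp, err := client.Get("https://10.0.2.15:8443/healthz")
		if err != nil {
			fmt.Printf("stopped: %v\n", err) // mirrors the api_server.go:269 line
		} else {
			fmt.Printf("healthz: %s\n", resp.Status)
			resp.Body.Close()
		}
		time.Sleep(10 * time.Second) // ~10s gap between probes in the log
	}
}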
	I0829 11:57:06.715444    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0829 11:57:06.742177    4119 logs.go:276] 2 containers: [a69eed03c2d2 61d794536a77]
	I0829 11:57:06.742294    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0829 11:57:06.759830    4119 logs.go:276] 2 containers: [cdc2ea695101 16eb67c2d136]
	I0829 11:57:06.759921    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0829 11:57:06.772831    4119 logs.go:276] 1 containers: [b15e5f87cad3]
	I0829 11:57:06.772905    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0829 11:57:06.784762    4119 logs.go:276] 2 containers: [6f05875fe402 df4daf7d4b00]
	I0829 11:57:06.784833    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0829 11:57:06.795088    4119 logs.go:276] 1 containers: [3bf90f429313]
	I0829 11:57:06.795155    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0829 11:57:06.806424    4119 logs.go:276] 2 containers: [de2759481e1c 6a3aea228f8f]
	I0829 11:57:06.806496    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0829 11:57:06.816864    4119 logs.go:276] 0 containers: []
	W0829 11:57:06.816875    4119 logs.go:278] No container was found matching "kindnet"
	I0829 11:57:06.816927    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0829 11:57:06.827571    4119 logs.go:276] 1 containers: [28b50a11dc0a]
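Each enumeration pass above issues one docker ps name filter per control-plane component and keeps the matching container IDs for the later "docker logs --tail 400" calls. A minimal Go sketch of the same pattern; the helper name and component list are illustrative, and only the docker command line itself is taken from the log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs runs the same filter query that appears in the log above.
func containerIDs(name string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name="+name, "--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"k8s_kube-apiserver", "k8s_etcd", "k8s_coredns"} {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Println(c, "error:", err)
			continue
		}
		fmt.Printf("%d containers: %v\n", len(ids), ids)
	}
}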
	I0829 11:57:06.827588    4119 logs.go:123] Gathering logs for kubelet ...
	I0829 11:57:06.827594    4119 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0829 11:57:06.864063    4119 logs.go:138] Found kubelet problem: Aug 29 18:55:05 running-upgrade-373000 kubelet[3970]: W0829 18:55:05.970143    3970 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-373000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-373000' and this object
	W0829 11:57:06.864162    4119 logs.go:138] Found kubelet problem: Aug 29 18:55:05 running-upgrade-373000 kubelet[3970]: E0829 18:55:05.970168    3970 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-373000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-373000' and this object
	I0829 11:57:06.864840    4119 logs.go:123] Gathering logs for dmesg ...
	I0829 11:57:06.864846    4119 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 11:57:06.869878    4119 logs.go:123] Gathering logs for kube-apiserver [61d794536a77] ...
	I0829 11:57:06.869890    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61d794536a77"
	I0829 11:57:06.909548    4119 logs.go:123] Gathering logs for container status ...
	I0829 11:57:06.909563    4119 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 11:57:06.922153    4119 logs.go:123] Gathering logs for kube-controller-manager [6a3aea228f8f] ...
	I0829 11:57:06.922165    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a3aea228f8f"
	I0829 11:57:06.935015    4119 logs.go:123] Gathering logs for Docker ...
	I0829 11:57:06.935027    4119 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0829 11:57:06.959476    4119 logs.go:123] Gathering logs for kube-scheduler [df4daf7d4b00] ...
	I0829 11:57:06.959485    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df4daf7d4b00"
	I0829 11:57:06.975546    4119 logs.go:123] Gathering logs for storage-provisioner [28b50a11dc0a] ...
	I0829 11:57:06.975556    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28b50a11dc0a"
	I0829 11:57:06.987643    4119 logs.go:123] Gathering logs for describe nodes ...
	I0829 11:57:06.987657    4119 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0829 11:57:07.022095    4119 logs.go:123] Gathering logs for kube-apiserver [a69eed03c2d2] ...
	I0829 11:57:07.022109    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a69eed03c2d2"
	I0829 11:57:07.036228    4119 logs.go:123] Gathering logs for etcd [cdc2ea695101] ...
	I0829 11:57:07.036242    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdc2ea695101"
	I0829 11:57:07.050151    4119 logs.go:123] Gathering logs for etcd [16eb67c2d136] ...
	I0829 11:57:07.050166    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16eb67c2d136"
	I0829 11:57:07.068292    4119 logs.go:123] Gathering logs for kube-scheduler [6f05875fe402] ...
	I0829 11:57:07.068305    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f05875fe402"
	I0829 11:57:07.079487    4119 logs.go:123] Gathering logs for coredns [b15e5f87cad3] ...
	I0829 11:57:07.079500    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b15e5f87cad3"
	I0829 11:57:07.091230    4119 logs.go:123] Gathering logs for kube-proxy [3bf90f429313] ...
	I0829 11:57:07.091242    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bf90f429313"
	I0829 11:57:07.102779    4119 logs.go:123] Gathering logs for kube-controller-manager [de2759481e1c] ...
	I0829 11:57:07.102790    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de2759481e1c"
	I0829 11:57:07.120210    4119 out.go:358] Setting ErrFile to fd 2...
	I0829 11:57:07.120220    4119 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0829 11:57:07.120245    4119 out.go:270] X Problems detected in kubelet:
	W0829 11:57:07.120249    4119 out.go:270]   Aug 29 18:55:05 running-upgrade-373000 kubelet[3970]: W0829 18:55:05.970143    3970 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-373000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-373000' and this object
	W0829 11:57:07.120253    4119 out.go:270]   Aug 29 18:55:05 running-upgrade-373000 kubelet[3970]: E0829 18:55:05.970168    3970 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-373000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-373000' and this object
	I0829 11:57:07.120256    4119 out.go:358] Setting ErrFile to fd 2...
	I0829 11:57:07.120259    4119 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 11:57:17.122911    4119 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0829 11:57:22.125250    4119 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0829 11:57:22.125419    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0829 11:57:22.148683    4119 logs.go:276] 2 containers: [a69eed03c2d2 61d794536a77]
	I0829 11:57:22.148807    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0829 11:57:22.171230    4119 logs.go:276] 2 containers: [cdc2ea695101 16eb67c2d136]
	I0829 11:57:22.171307    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0829 11:57:22.183522    4119 logs.go:276] 1 containers: [b15e5f87cad3]
	I0829 11:57:22.183597    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0829 11:57:22.193830    4119 logs.go:276] 2 containers: [6f05875fe402 df4daf7d4b00]
	I0829 11:57:22.193905    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0829 11:57:22.209668    4119 logs.go:276] 1 containers: [3bf90f429313]
	I0829 11:57:22.209740    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0829 11:57:22.220439    4119 logs.go:276] 2 containers: [de2759481e1c 6a3aea228f8f]
	I0829 11:57:22.220512    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0829 11:57:22.230383    4119 logs.go:276] 0 containers: []
	W0829 11:57:22.230397    4119 logs.go:278] No container was found matching "kindnet"
	I0829 11:57:22.230458    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0829 11:57:22.240634    4119 logs.go:276] 1 containers: [28b50a11dc0a]
	I0829 11:57:22.240653    4119 logs.go:123] Gathering logs for etcd [cdc2ea695101] ...
	I0829 11:57:22.240659    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdc2ea695101"
	I0829 11:57:22.254505    4119 logs.go:123] Gathering logs for etcd [16eb67c2d136] ...
	I0829 11:57:22.254518    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16eb67c2d136"
	I0829 11:57:22.275418    4119 logs.go:123] Gathering logs for kube-scheduler [df4daf7d4b00] ...
	I0829 11:57:22.275431    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df4daf7d4b00"
	I0829 11:57:22.290623    4119 logs.go:123] Gathering logs for container status ...
	I0829 11:57:22.290634    4119 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 11:57:22.302602    4119 logs.go:123] Gathering logs for kube-apiserver [61d794536a77] ...
	I0829 11:57:22.302614    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61d794536a77"
	I0829 11:57:22.351058    4119 logs.go:123] Gathering logs for kube-scheduler [6f05875fe402] ...
	I0829 11:57:22.351071    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f05875fe402"
	I0829 11:57:22.364251    4119 logs.go:123] Gathering logs for kube-proxy [3bf90f429313] ...
	I0829 11:57:22.364264    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bf90f429313"
	I0829 11:57:22.375897    4119 logs.go:123] Gathering logs for kube-controller-manager [6a3aea228f8f] ...
	I0829 11:57:22.375909    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a3aea228f8f"
	I0829 11:57:22.392685    4119 logs.go:123] Gathering logs for coredns [b15e5f87cad3] ...
	I0829 11:57:22.392697    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b15e5f87cad3"
	I0829 11:57:22.404362    4119 logs.go:123] Gathering logs for dmesg ...
	I0829 11:57:22.404373    4119 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 11:57:22.409150    4119 logs.go:123] Gathering logs for kube-apiserver [a69eed03c2d2] ...
	I0829 11:57:22.409160    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a69eed03c2d2"
	I0829 11:57:22.423315    4119 logs.go:123] Gathering logs for kube-controller-manager [de2759481e1c] ...
	I0829 11:57:22.423326    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de2759481e1c"
	I0829 11:57:22.441209    4119 logs.go:123] Gathering logs for storage-provisioner [28b50a11dc0a] ...
	I0829 11:57:22.441220    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28b50a11dc0a"
	I0829 11:57:22.452806    4119 logs.go:123] Gathering logs for Docker ...
	I0829 11:57:22.452817    4119 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0829 11:57:22.477511    4119 logs.go:123] Gathering logs for kubelet ...
	I0829 11:57:22.477523    4119 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0829 11:57:22.513426    4119 logs.go:138] Found kubelet problem: Aug 29 18:55:05 running-upgrade-373000 kubelet[3970]: W0829 18:55:05.970143    3970 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-373000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-373000' and this object
	W0829 11:57:22.513523    4119 logs.go:138] Found kubelet problem: Aug 29 18:55:05 running-upgrade-373000 kubelet[3970]: E0829 18:55:05.970168    3970 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-373000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-373000' and this object
	I0829 11:57:22.514225    4119 logs.go:123] Gathering logs for describe nodes ...
	I0829 11:57:22.514233    4119 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0829 11:57:22.548904    4119 out.go:358] Setting ErrFile to fd 2...
	I0829 11:57:22.548917    4119 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0829 11:57:22.548947    4119 out.go:270] X Problems detected in kubelet:
	W0829 11:57:22.548952    4119 out.go:270]   Aug 29 18:55:05 running-upgrade-373000 kubelet[3970]: W0829 18:55:05.970143    3970 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-373000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-373000' and this object
	W0829 11:57:22.548956    4119 out.go:270]   Aug 29 18:55:05 running-upgrade-373000 kubelet[3970]: E0829 18:55:05.970168    3970 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-373000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-373000' and this object
	I0829 11:57:22.548965    4119 out.go:358] Setting ErrFile to fd 2...
	I0829 11:57:22.548968    4119 out.go:392] TERM=,COLORTERM=, which probably does not support color
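The recurring "Found kubelet problem" warnings are produced by scanning the journalctl -u kubelet output for known failure markers; in this run the only hit is the RBAC "forbidden" error on the kube-proxy ConfigMap. A minimal Go sketch of such a scan, with assumed substring markers (the journalctl invocation is from the log, where it runs under sudo over SSH; minikube's real marker list lives in logs.go):

package main

import (
	"bufio"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// same unit and line count as the "Gathering logs for kubelet" step above
	cmd := exec.Command("journalctl", "-u", "kubelet", "-n", "400")
	out, err := cmd.StdoutPipe()
	if err != nil {
		panic(err)
	}
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	sc := bufio.NewScanner(out)
	for sc.Scan() {
		line := sc.Text()
		// assumed markers, chosen to match the errors seen in this report
		if strings.Contains(line, "forbidden") || strings.Contains(line, "Failed to watch") {
			fmt.Println("Found kubelet problem:", line)
		}
	}
	cmd.Wait()
}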
	I0829 11:57:32.553016    4119 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0829 11:57:37.555324    4119 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0829 11:57:37.555494    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0829 11:57:37.571763    4119 logs.go:276] 2 containers: [a69eed03c2d2 61d794536a77]
	I0829 11:57:37.571845    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0829 11:57:37.584316    4119 logs.go:276] 2 containers: [cdc2ea695101 16eb67c2d136]
	I0829 11:57:37.584391    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0829 11:57:37.595857    4119 logs.go:276] 1 containers: [b15e5f87cad3]
	I0829 11:57:37.595926    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0829 11:57:37.606688    4119 logs.go:276] 2 containers: [6f05875fe402 df4daf7d4b00]
	I0829 11:57:37.606760    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0829 11:57:37.617654    4119 logs.go:276] 1 containers: [3bf90f429313]
	I0829 11:57:37.617725    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0829 11:57:37.628629    4119 logs.go:276] 2 containers: [de2759481e1c 6a3aea228f8f]
	I0829 11:57:37.628695    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0829 11:57:37.639332    4119 logs.go:276] 0 containers: []
	W0829 11:57:37.639341    4119 logs.go:278] No container was found matching "kindnet"
	I0829 11:57:37.639391    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0829 11:57:37.649694    4119 logs.go:276] 1 containers: [28b50a11dc0a]
	I0829 11:57:37.649711    4119 logs.go:123] Gathering logs for kubelet ...
	I0829 11:57:37.649717    4119 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0829 11:57:37.686144    4119 logs.go:138] Found kubelet problem: Aug 29 18:55:05 running-upgrade-373000 kubelet[3970]: W0829 18:55:05.970143    3970 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-373000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-373000' and this object
	W0829 11:57:37.686238    4119 logs.go:138] Found kubelet problem: Aug 29 18:55:05 running-upgrade-373000 kubelet[3970]: E0829 18:55:05.970168    3970 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-373000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-373000' and this object
	I0829 11:57:37.686902    4119 logs.go:123] Gathering logs for kube-apiserver [61d794536a77] ...
	I0829 11:57:37.686908    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61d794536a77"
	I0829 11:57:37.728945    4119 logs.go:123] Gathering logs for etcd [16eb67c2d136] ...
	I0829 11:57:37.728958    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16eb67c2d136"
	I0829 11:57:37.746678    4119 logs.go:123] Gathering logs for kube-scheduler [6f05875fe402] ...
	I0829 11:57:37.746690    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f05875fe402"
	I0829 11:57:37.758064    4119 logs.go:123] Gathering logs for kube-controller-manager [6a3aea228f8f] ...
	I0829 11:57:37.758076    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a3aea228f8f"
	I0829 11:57:37.770729    4119 logs.go:123] Gathering logs for storage-provisioner [28b50a11dc0a] ...
	I0829 11:57:37.770739    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28b50a11dc0a"
	I0829 11:57:37.782204    4119 logs.go:123] Gathering logs for kube-apiserver [a69eed03c2d2] ...
	I0829 11:57:37.782217    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a69eed03c2d2"
	I0829 11:57:37.799951    4119 logs.go:123] Gathering logs for kube-proxy [3bf90f429313] ...
	I0829 11:57:37.799963    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bf90f429313"
	I0829 11:57:37.811961    4119 logs.go:123] Gathering logs for container status ...
	I0829 11:57:37.811971    4119 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 11:57:37.825518    4119 logs.go:123] Gathering logs for dmesg ...
	I0829 11:57:37.825530    4119 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 11:57:37.830301    4119 logs.go:123] Gathering logs for describe nodes ...
	I0829 11:57:37.830311    4119 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0829 11:57:37.867225    4119 logs.go:123] Gathering logs for etcd [cdc2ea695101] ...
	I0829 11:57:37.867237    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdc2ea695101"
	I0829 11:57:37.881162    4119 logs.go:123] Gathering logs for coredns [b15e5f87cad3] ...
	I0829 11:57:37.881173    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b15e5f87cad3"
	I0829 11:57:37.892269    4119 logs.go:123] Gathering logs for kube-scheduler [df4daf7d4b00] ...
	I0829 11:57:37.892282    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df4daf7d4b00"
	I0829 11:57:37.911482    4119 logs.go:123] Gathering logs for kube-controller-manager [de2759481e1c] ...
	I0829 11:57:37.911495    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de2759481e1c"
	I0829 11:57:37.928639    4119 logs.go:123] Gathering logs for Docker ...
	I0829 11:57:37.928650    4119 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0829 11:57:37.954273    4119 out.go:358] Setting ErrFile to fd 2...
	I0829 11:57:37.954286    4119 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0829 11:57:37.954309    4119 out.go:270] X Problems detected in kubelet:
	W0829 11:57:37.954313    4119 out.go:270]   Aug 29 18:55:05 running-upgrade-373000 kubelet[3970]: W0829 18:55:05.970143    3970 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-373000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-373000' and this object
	W0829 11:57:37.954317    4119 out.go:270]   Aug 29 18:55:05 running-upgrade-373000 kubelet[3970]: E0829 18:55:05.970168    3970 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-373000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-373000' and this object
	I0829 11:57:37.954320    4119 out.go:358] Setting ErrFile to fd 2...
	I0829 11:57:37.954323    4119 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 11:57:47.957343    4119 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0829 11:57:52.959631    4119 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0829 11:57:52.959809    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0829 11:57:52.987380    4119 logs.go:276] 2 containers: [a69eed03c2d2 61d794536a77]
	I0829 11:57:52.987509    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0829 11:57:53.004998    4119 logs.go:276] 2 containers: [cdc2ea695101 16eb67c2d136]
	I0829 11:57:53.005085    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0829 11:57:53.018031    4119 logs.go:276] 1 containers: [b15e5f87cad3]
	I0829 11:57:53.018110    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0829 11:57:53.029772    4119 logs.go:276] 2 containers: [6f05875fe402 df4daf7d4b00]
	I0829 11:57:53.029841    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0829 11:57:53.040612    4119 logs.go:276] 1 containers: [3bf90f429313]
	I0829 11:57:53.040685    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0829 11:57:53.051418    4119 logs.go:276] 2 containers: [de2759481e1c 6a3aea228f8f]
	I0829 11:57:53.051482    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0829 11:57:53.061782    4119 logs.go:276] 0 containers: []
	W0829 11:57:53.061796    4119 logs.go:278] No container was found matching "kindnet"
	I0829 11:57:53.061854    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0829 11:57:53.072675    4119 logs.go:276] 1 containers: [28b50a11dc0a]
	I0829 11:57:53.072692    4119 logs.go:123] Gathering logs for describe nodes ...
	I0829 11:57:53.072698    4119 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0829 11:57:53.107584    4119 logs.go:123] Gathering logs for storage-provisioner [28b50a11dc0a] ...
	I0829 11:57:53.107599    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28b50a11dc0a"
	I0829 11:57:53.119754    4119 logs.go:123] Gathering logs for container status ...
	I0829 11:57:53.119766    4119 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 11:57:53.132277    4119 logs.go:123] Gathering logs for kubelet ...
	I0829 11:57:53.132289    4119 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0829 11:57:53.168777    4119 logs.go:138] Found kubelet problem: Aug 29 18:55:05 running-upgrade-373000 kubelet[3970]: W0829 18:55:05.970143    3970 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-373000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-373000' and this object
	W0829 11:57:53.168877    4119 logs.go:138] Found kubelet problem: Aug 29 18:55:05 running-upgrade-373000 kubelet[3970]: E0829 18:55:05.970168    3970 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-373000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-373000' and this object
	I0829 11:57:53.169579    4119 logs.go:123] Gathering logs for dmesg ...
	I0829 11:57:53.169590    4119 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 11:57:53.174285    4119 logs.go:123] Gathering logs for kube-controller-manager [6a3aea228f8f] ...
	I0829 11:57:53.174292    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a3aea228f8f"
	I0829 11:57:53.190144    4119 logs.go:123] Gathering logs for Docker ...
	I0829 11:57:53.190156    4119 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0829 11:57:53.213728    4119 logs.go:123] Gathering logs for coredns [b15e5f87cad3] ...
	I0829 11:57:53.213739    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b15e5f87cad3"
	I0829 11:57:53.225334    4119 logs.go:123] Gathering logs for kube-scheduler [6f05875fe402] ...
	I0829 11:57:53.225347    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f05875fe402"
	I0829 11:57:53.237382    4119 logs.go:123] Gathering logs for kube-proxy [3bf90f429313] ...
	I0829 11:57:53.237393    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bf90f429313"
	I0829 11:57:53.252020    4119 logs.go:123] Gathering logs for kube-controller-manager [de2759481e1c] ...
	I0829 11:57:53.252034    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de2759481e1c"
	I0829 11:57:53.268887    4119 logs.go:123] Gathering logs for etcd [cdc2ea695101] ...
	I0829 11:57:53.268898    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdc2ea695101"
	I0829 11:57:53.286104    4119 logs.go:123] Gathering logs for kube-scheduler [df4daf7d4b00] ...
	I0829 11:57:53.286115    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df4daf7d4b00"
	I0829 11:57:53.301823    4119 logs.go:123] Gathering logs for etcd [16eb67c2d136] ...
	I0829 11:57:53.301837    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16eb67c2d136"
	I0829 11:57:53.324219    4119 logs.go:123] Gathering logs for kube-apiserver [a69eed03c2d2] ...
	I0829 11:57:53.324233    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a69eed03c2d2"
	I0829 11:57:53.338069    4119 logs.go:123] Gathering logs for kube-apiserver [61d794536a77] ...
	I0829 11:57:53.338080    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61d794536a77"
	I0829 11:57:53.378796    4119 out.go:358] Setting ErrFile to fd 2...
	I0829 11:57:53.378806    4119 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0829 11:57:53.378832    4119 out.go:270] X Problems detected in kubelet:
	W0829 11:57:53.378836    4119 out.go:270]   Aug 29 18:55:05 running-upgrade-373000 kubelet[3970]: W0829 18:55:05.970143    3970 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-373000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-373000' and this object
	W0829 11:57:53.378840    4119 out.go:270]   Aug 29 18:55:05 running-upgrade-373000 kubelet[3970]: E0829 18:55:05.970168    3970 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-373000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-373000' and this object
	I0829 11:57:53.378845    4119 out.go:358] Setting ErrFile to fd 2...
	I0829 11:57:53.378848    4119 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 11:58:03.382984    4119 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0829 11:58:08.385680    4119 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0829 11:58:08.386079    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0829 11:58:08.425511    4119 logs.go:276] 2 containers: [a69eed03c2d2 61d794536a77]
	I0829 11:58:08.425658    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0829 11:58:08.454655    4119 logs.go:276] 2 containers: [cdc2ea695101 16eb67c2d136]
	I0829 11:58:08.454762    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0829 11:58:08.474754    4119 logs.go:276] 1 containers: [b15e5f87cad3]
	I0829 11:58:08.474834    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0829 11:58:08.485657    4119 logs.go:276] 2 containers: [6f05875fe402 df4daf7d4b00]
	I0829 11:58:08.485726    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0829 11:58:08.496177    4119 logs.go:276] 1 containers: [3bf90f429313]
	I0829 11:58:08.496249    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0829 11:58:08.506607    4119 logs.go:276] 2 containers: [de2759481e1c 6a3aea228f8f]
	I0829 11:58:08.506670    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0829 11:58:08.517192    4119 logs.go:276] 0 containers: []
	W0829 11:58:08.517206    4119 logs.go:278] No container was found matching "kindnet"
	I0829 11:58:08.517264    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0829 11:58:08.529482    4119 logs.go:276] 1 containers: [28b50a11dc0a]
	I0829 11:58:08.529502    4119 logs.go:123] Gathering logs for kube-apiserver [a69eed03c2d2] ...
	I0829 11:58:08.529508    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a69eed03c2d2"
	I0829 11:58:08.544599    4119 logs.go:123] Gathering logs for etcd [cdc2ea695101] ...
	I0829 11:58:08.544611    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdc2ea695101"
	I0829 11:58:08.565936    4119 logs.go:123] Gathering logs for kube-controller-manager [6a3aea228f8f] ...
	I0829 11:58:08.565947    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a3aea228f8f"
	I0829 11:58:08.578864    4119 logs.go:123] Gathering logs for Docker ...
	I0829 11:58:08.578875    4119 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0829 11:58:08.603312    4119 logs.go:123] Gathering logs for kubelet ...
	I0829 11:58:08.603324    4119 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0829 11:58:08.639682    4119 logs.go:138] Found kubelet problem: Aug 29 18:55:05 running-upgrade-373000 kubelet[3970]: W0829 18:55:05.970143    3970 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-373000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-373000' and this object
	W0829 11:58:08.639780    4119 logs.go:138] Found kubelet problem: Aug 29 18:55:05 running-upgrade-373000 kubelet[3970]: E0829 18:55:05.970168    3970 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-373000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-373000' and this object
	I0829 11:58:08.640479    4119 logs.go:123] Gathering logs for coredns [b15e5f87cad3] ...
	I0829 11:58:08.640487    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b15e5f87cad3"
	I0829 11:58:08.652433    4119 logs.go:123] Gathering logs for kube-controller-manager [de2759481e1c] ...
	I0829 11:58:08.652444    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de2759481e1c"
	I0829 11:58:08.669689    4119 logs.go:123] Gathering logs for storage-provisioner [28b50a11dc0a] ...
	I0829 11:58:08.669700    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28b50a11dc0a"
	I0829 11:58:08.681227    4119 logs.go:123] Gathering logs for dmesg ...
	I0829 11:58:08.681241    4119 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 11:58:08.685898    4119 logs.go:123] Gathering logs for etcd [16eb67c2d136] ...
	I0829 11:58:08.685904    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16eb67c2d136"
	I0829 11:58:08.703782    4119 logs.go:123] Gathering logs for kube-scheduler [df4daf7d4b00] ...
	I0829 11:58:08.703795    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df4daf7d4b00"
	I0829 11:58:08.724599    4119 logs.go:123] Gathering logs for kube-proxy [3bf90f429313] ...
	I0829 11:58:08.724609    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bf90f429313"
	I0829 11:58:08.736017    4119 logs.go:123] Gathering logs for describe nodes ...
	I0829 11:58:08.736027    4119 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0829 11:58:08.773072    4119 logs.go:123] Gathering logs for kube-apiserver [61d794536a77] ...
	I0829 11:58:08.773083    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61d794536a77"
	I0829 11:58:08.811478    4119 logs.go:123] Gathering logs for kube-scheduler [6f05875fe402] ...
	I0829 11:58:08.811490    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f05875fe402"
	I0829 11:58:08.823316    4119 logs.go:123] Gathering logs for container status ...
	I0829 11:58:08.823327    4119 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 11:58:08.836892    4119 out.go:358] Setting ErrFile to fd 2...
	I0829 11:58:08.836902    4119 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0829 11:58:08.836929    4119 out.go:270] X Problems detected in kubelet:
	W0829 11:58:08.836935    4119 out.go:270]   Aug 29 18:55:05 running-upgrade-373000 kubelet[3970]: W0829 18:55:05.970143    3970 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-373000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-373000' and this object
	W0829 11:58:08.836939    4119 out.go:270]   Aug 29 18:55:05 running-upgrade-373000 kubelet[3970]: E0829 18:55:05.970168    3970 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-373000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-373000' and this object
	I0829 11:58:08.836945    4119 out.go:358] Setting ErrFile to fd 2...
	I0829 11:58:08.836948    4119 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 11:58:18.841021    4119 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0829 11:58:23.843377    4119 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0829 11:58:23.843491    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0829 11:58:23.854637    4119 logs.go:276] 2 containers: [a69eed03c2d2 61d794536a77]
	I0829 11:58:23.854708    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0829 11:58:23.865146    4119 logs.go:276] 2 containers: [cdc2ea695101 16eb67c2d136]
	I0829 11:58:23.865217    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0829 11:58:23.877366    4119 logs.go:276] 1 containers: [b15e5f87cad3]
	I0829 11:58:23.877428    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0829 11:58:23.888760    4119 logs.go:276] 2 containers: [6f05875fe402 df4daf7d4b00]
	I0829 11:58:23.888831    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0829 11:58:23.900423    4119 logs.go:276] 1 containers: [3bf90f429313]
	I0829 11:58:23.900492    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0829 11:58:23.911251    4119 logs.go:276] 2 containers: [de2759481e1c 6a3aea228f8f]
	I0829 11:58:23.911316    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0829 11:58:23.921364    4119 logs.go:276] 0 containers: []
	W0829 11:58:23.921377    4119 logs.go:278] No container was found matching "kindnet"
	I0829 11:58:23.921438    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0829 11:58:23.932138    4119 logs.go:276] 1 containers: [28b50a11dc0a]
	I0829 11:58:23.932157    4119 logs.go:123] Gathering logs for describe nodes ...
	I0829 11:58:23.932164    4119 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0829 11:58:23.968001    4119 logs.go:123] Gathering logs for kube-scheduler [df4daf7d4b00] ...
	I0829 11:58:23.968016    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df4daf7d4b00"
	I0829 11:58:23.983793    4119 logs.go:123] Gathering logs for Docker ...
	I0829 11:58:23.983806    4119 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0829 11:58:24.008410    4119 logs.go:123] Gathering logs for storage-provisioner [28b50a11dc0a] ...
	I0829 11:58:24.008417    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28b50a11dc0a"
	I0829 11:58:24.019718    4119 logs.go:123] Gathering logs for container status ...
	I0829 11:58:24.019732    4119 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 11:58:24.036738    4119 logs.go:123] Gathering logs for kube-apiserver [a69eed03c2d2] ...
	I0829 11:58:24.036749    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a69eed03c2d2"
	I0829 11:58:24.057216    4119 logs.go:123] Gathering logs for etcd [16eb67c2d136] ...
	I0829 11:58:24.057229    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16eb67c2d136"
	I0829 11:58:24.075512    4119 logs.go:123] Gathering logs for coredns [b15e5f87cad3] ...
	I0829 11:58:24.075522    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b15e5f87cad3"
	I0829 11:58:24.087266    4119 logs.go:123] Gathering logs for kube-proxy [3bf90f429313] ...
	I0829 11:58:24.087277    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bf90f429313"
	I0829 11:58:24.103236    4119 logs.go:123] Gathering logs for kube-controller-manager [de2759481e1c] ...
	I0829 11:58:24.103250    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de2759481e1c"
	I0829 11:58:24.124824    4119 logs.go:123] Gathering logs for kube-controller-manager [6a3aea228f8f] ...
	I0829 11:58:24.124835    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a3aea228f8f"
	I0829 11:58:24.137426    4119 logs.go:123] Gathering logs for kubelet ...
	I0829 11:58:24.137438    4119 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0829 11:58:24.172144    4119 logs.go:138] Found kubelet problem: Aug 29 18:55:05 running-upgrade-373000 kubelet[3970]: W0829 18:55:05.970143    3970 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-373000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-373000' and this object
	W0829 11:58:24.172249    4119 logs.go:138] Found kubelet problem: Aug 29 18:55:05 running-upgrade-373000 kubelet[3970]: E0829 18:55:05.970168    3970 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-373000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-373000' and this object
	I0829 11:58:24.172929    4119 logs.go:123] Gathering logs for etcd [cdc2ea695101] ...
	I0829 11:58:24.172935    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdc2ea695101"
	I0829 11:58:24.187679    4119 logs.go:123] Gathering logs for kube-scheduler [6f05875fe402] ...
	I0829 11:58:24.187691    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f05875fe402"
	I0829 11:58:24.199318    4119 logs.go:123] Gathering logs for dmesg ...
	I0829 11:58:24.199330    4119 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 11:58:24.204556    4119 logs.go:123] Gathering logs for kube-apiserver [61d794536a77] ...
	I0829 11:58:24.204566    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61d794536a77"
	I0829 11:58:24.242031    4119 out.go:358] Setting ErrFile to fd 2...
	I0829 11:58:24.242040    4119 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0829 11:58:24.242063    4119 out.go:270] X Problems detected in kubelet:
	W0829 11:58:24.242067    4119 out.go:270]   Aug 29 18:55:05 running-upgrade-373000 kubelet[3970]: W0829 18:55:05.970143    3970 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-373000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-373000' and this object
	W0829 11:58:24.242070    4119 out.go:270]   Aug 29 18:55:05 running-upgrade-373000 kubelet[3970]: E0829 18:55:05.970168    3970 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-373000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-373000' and this object
	I0829 11:58:24.242074    4119 out.go:358] Setting ErrFile to fd 2...
	I0829 11:58:24.242076    4119 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 11:58:34.246028    4119 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0829 11:58:39.246347    4119 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": dial tcp 10.0.2.15:8443: i/o timeout (Client.Timeout exceeded while awaiting headers)
	I0829 11:58:39.246436    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0829 11:58:39.257577    4119 logs.go:276] 2 containers: [a69eed03c2d2 61d794536a77]
	I0829 11:58:39.257656    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0829 11:58:39.268556    4119 logs.go:276] 2 containers: [cdc2ea695101 16eb67c2d136]
	I0829 11:58:39.268623    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0829 11:58:39.279345    4119 logs.go:276] 1 containers: [b15e5f87cad3]
	I0829 11:58:39.279421    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0829 11:58:39.289983    4119 logs.go:276] 2 containers: [6f05875fe402 df4daf7d4b00]
	I0829 11:58:39.290049    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0829 11:58:39.300437    4119 logs.go:276] 1 containers: [3bf90f429313]
	I0829 11:58:39.300494    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0829 11:58:39.311497    4119 logs.go:276] 2 containers: [de2759481e1c 6a3aea228f8f]
	I0829 11:58:39.311571    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0829 11:58:39.326500    4119 logs.go:276] 0 containers: []
	W0829 11:58:39.326511    4119 logs.go:278] No container was found matching "kindnet"
	I0829 11:58:39.326569    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0829 11:58:39.337414    4119 logs.go:276] 1 containers: [28b50a11dc0a]
	I0829 11:58:39.337436    4119 logs.go:123] Gathering logs for container status ...
	I0829 11:58:39.337442    4119 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 11:58:39.349637    4119 logs.go:123] Gathering logs for kube-apiserver [61d794536a77] ...
	I0829 11:58:39.349649    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 61d794536a77"
	I0829 11:58:39.391390    4119 logs.go:123] Gathering logs for etcd [cdc2ea695101] ...
	I0829 11:58:39.391405    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cdc2ea695101"
	I0829 11:58:39.405821    4119 logs.go:123] Gathering logs for coredns [b15e5f87cad3] ...
	I0829 11:58:39.405832    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b15e5f87cad3"
	I0829 11:58:39.417653    4119 logs.go:123] Gathering logs for kube-controller-manager [de2759481e1c] ...
	I0829 11:58:39.417665    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 de2759481e1c"
	I0829 11:58:39.435300    4119 logs.go:123] Gathering logs for storage-provisioner [28b50a11dc0a] ...
	I0829 11:58:39.435311    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28b50a11dc0a"
	I0829 11:58:39.446642    4119 logs.go:123] Gathering logs for Docker ...
	I0829 11:58:39.446653    4119 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0829 11:58:39.470688    4119 logs.go:123] Gathering logs for kubelet ...
	I0829 11:58:39.470698    4119 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0829 11:58:39.506626    4119 logs.go:138] Found kubelet problem: Aug 29 18:55:05 running-upgrade-373000 kubelet[3970]: W0829 18:55:05.970143    3970 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-373000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-373000' and this object
	W0829 11:58:39.506718    4119 logs.go:138] Found kubelet problem: Aug 29 18:55:05 running-upgrade-373000 kubelet[3970]: E0829 18:55:05.970168    3970 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-373000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-373000' and this object
	I0829 11:58:39.507394    4119 logs.go:123] Gathering logs for kube-apiserver [a69eed03c2d2] ...
	I0829 11:58:39.507399    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a69eed03c2d2"
	I0829 11:58:39.524698    4119 logs.go:123] Gathering logs for kube-scheduler [6f05875fe402] ...
	I0829 11:58:39.524710    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6f05875fe402"
	I0829 11:58:39.536825    4119 logs.go:123] Gathering logs for kube-proxy [3bf90f429313] ...
	I0829 11:58:39.536837    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3bf90f429313"
	I0829 11:58:39.549336    4119 logs.go:123] Gathering logs for kube-controller-manager [6a3aea228f8f] ...
	I0829 11:58:39.549351    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6a3aea228f8f"
	I0829 11:58:39.562524    4119 logs.go:123] Gathering logs for describe nodes ...
	I0829 11:58:39.562540    4119 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0829 11:58:39.597407    4119 logs.go:123] Gathering logs for etcd [16eb67c2d136] ...
	I0829 11:58:39.597418    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 16eb67c2d136"
	I0829 11:58:39.615368    4119 logs.go:123] Gathering logs for kube-scheduler [df4daf7d4b00] ...
	I0829 11:58:39.615380    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 df4daf7d4b00"
	I0829 11:58:39.630975    4119 logs.go:123] Gathering logs for dmesg ...
	I0829 11:58:39.630987    4119 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 11:58:39.635683    4119 out.go:358] Setting ErrFile to fd 2...
	I0829 11:58:39.635691    4119 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0829 11:58:39.635714    4119 out.go:270] X Problems detected in kubelet:
	W0829 11:58:39.635719    4119 out.go:270]   Aug 29 18:55:05 running-upgrade-373000 kubelet[3970]: W0829 18:55:05.970143    3970 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-373000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-373000' and this object
	W0829 11:58:39.635722    4119 out.go:270]   Aug 29 18:55:05 running-upgrade-373000 kubelet[3970]: E0829 18:55:05.970168    3970 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-373000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-373000' and this object
	I0829 11:58:39.635728    4119 out.go:358] Setting ErrFile to fd 2...
	I0829 11:58:39.635742    4119 out.go:392] TERM=,COLORTERM=, which probably does not support color
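	The "Gathering logs for …" passes above all reduce to running a fixed set of shell commands over SSH in the guest: journalctl for the kubelet and Docker units, docker logs --tail 400 for each container discovered via docker ps, and crictl/docker ps for container status. A minimal local sketch of that loop (container IDs hard-coded from the log; minikube itself routes these through ssh_runner rather than running them locally):

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Commands mirroring the gathering steps logged above.
	cmds := [][]string{
		{"sudo", "journalctl", "-u", "kubelet", "-n", "400"},
		{"sudo", "journalctl", "-u", "docker", "-u", "cri-docker", "-n", "400"},
		{"docker", "logs", "--tail", "400", "a69eed03c2d2"}, // kube-apiserver
		{"docker", "logs", "--tail", "400", "cdc2ea695101"}, // etcd
	}
	for _, c := range cmds {
		out, err := exec.Command(c[0], c[1:]...).CombinedOutput()
		if err != nil {
			fmt.Printf("%v failed: %v\n", c, err)
			continue
		}
		fmt.Printf("==> %v\n%s\n", c, out)
	}
}
```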
	I0829 11:58:49.639718    4119 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0829 11:58:54.641961    4119 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0829 11:58:54.641994    4119 kubeadm.go:597] duration metric: took 4m7.47318725s to restartPrimaryControlPlane
	W0829 11:58:54.642028    4119 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0829 11:58:54.642053    4119 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0829 11:58:55.665380    4119 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.023330125s)
	I0829 11:58:55.665445    4119 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 11:58:55.670509    4119 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0829 11:58:55.673528    4119 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0829 11:58:55.676344    4119 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0829 11:58:55.676350    4119 kubeadm.go:157] found existing configuration files:
	
	I0829 11:58:55.676372    4119 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50346 /etc/kubernetes/admin.conf
	I0829 11:58:55.679618    4119 kubeadm.go:163] "https://control-plane.minikube.internal:50346" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50346 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0829 11:58:55.679643    4119 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0829 11:58:55.682847    4119 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50346 /etc/kubernetes/kubelet.conf
	I0829 11:58:55.685558    4119 kubeadm.go:163] "https://control-plane.minikube.internal:50346" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50346 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0829 11:58:55.685582    4119 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0829 11:58:55.688297    4119 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50346 /etc/kubernetes/controller-manager.conf
	I0829 11:58:55.691542    4119 kubeadm.go:163] "https://control-plane.minikube.internal:50346" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50346 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0829 11:58:55.691566    4119 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0829 11:58:55.694225    4119 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50346 /etc/kubernetes/scheduler.conf
	I0829 11:58:55.696807    4119 kubeadm.go:163] "https://control-plane.minikube.internal:50346" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50346 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0829 11:58:55.696832    4119 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
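	The four grep/rm pairs above implement a single rule: any kubeconfig under /etc/kubernetes that does not reference the expected control-plane endpoint is treated as stale and removed before kubeadm init regenerates it. A compact sketch of that rule (endpoint copied from the log; here the files do not exist at all, so every grep fails and every path is removed):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:50346"
	confs := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range confs {
		// grep exits non-zero when the pattern (or the file) is missing.
		if err := exec.Command("grep", endpoint, f).Run(); err != nil {
			fmt.Printf("%s may not reference %s - removing\n", f, endpoint)
			os.Remove(f) // kubeadm init will rewrite it
		}
	}
}
```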
	I0829 11:58:55.699984    4119 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0829 11:58:55.717682    4119 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0829 11:58:55.717719    4119 kubeadm.go:310] [preflight] Running pre-flight checks
	I0829 11:58:55.765291    4119 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0829 11:58:55.765348    4119 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0829 11:58:55.765397    4119 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0829 11:58:55.817718    4119 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0829 11:58:55.821010    4119 out.go:235]   - Generating certificates and keys ...
	I0829 11:58:55.821048    4119 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0829 11:58:55.821077    4119 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0829 11:58:55.821125    4119 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0829 11:58:55.821159    4119 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0829 11:58:55.821196    4119 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0829 11:58:55.821223    4119 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0829 11:58:55.821286    4119 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0829 11:58:55.821319    4119 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0829 11:58:55.821357    4119 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0829 11:58:55.821396    4119 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0829 11:58:55.821412    4119 kubeadm.go:310] [certs] Using the existing "sa" key
	I0829 11:58:55.821462    4119 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0829 11:58:55.869914    4119 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0829 11:58:55.965758    4119 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0829 11:58:56.091200    4119 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0829 11:58:56.186960    4119 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0829 11:58:56.216182    4119 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0829 11:58:56.216522    4119 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0829 11:58:56.216550    4119 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0829 11:58:56.305705    4119 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0829 11:58:56.313904    4119 out.go:235]   - Booting up control plane ...
	I0829 11:58:56.313970    4119 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0829 11:58:56.314017    4119 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0829 11:58:56.314068    4119 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0829 11:58:56.314112    4119 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0829 11:58:56.314192    4119 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0829 11:59:00.813954    4119 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.503085 seconds
	I0829 11:59:00.814114    4119 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0829 11:59:00.818153    4119 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0829 11:59:01.334532    4119 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0829 11:59:01.334789    4119 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-373000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0829 11:59:01.838856    4119 kubeadm.go:310] [bootstrap-token] Using token: vq31dy.kcihsz2rnzecgmot
	I0829 11:59:01.843096    4119 out.go:235]   - Configuring RBAC rules ...
	I0829 11:59:01.843151    4119 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0829 11:59:01.843198    4119 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0829 11:59:01.845334    4119 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0829 11:59:01.850806    4119 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0829 11:59:01.851749    4119 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0829 11:59:01.852708    4119 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0829 11:59:01.855648    4119 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0829 11:59:02.031227    4119 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0829 11:59:02.242753    4119 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0829 11:59:02.243303    4119 kubeadm.go:310] 
	I0829 11:59:02.243335    4119 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0829 11:59:02.243339    4119 kubeadm.go:310] 
	I0829 11:59:02.243391    4119 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0829 11:59:02.243396    4119 kubeadm.go:310] 
	I0829 11:59:02.243407    4119 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0829 11:59:02.243437    4119 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0829 11:59:02.243460    4119 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0829 11:59:02.243501    4119 kubeadm.go:310] 
	I0829 11:59:02.243597    4119 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0829 11:59:02.243611    4119 kubeadm.go:310] 
	I0829 11:59:02.243696    4119 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0829 11:59:02.243708    4119 kubeadm.go:310] 
	I0829 11:59:02.243736    4119 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0829 11:59:02.243797    4119 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0829 11:59:02.243840    4119 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0829 11:59:02.243845    4119 kubeadm.go:310] 
	I0829 11:59:02.243952    4119 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0829 11:59:02.244026    4119 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0829 11:59:02.244031    4119 kubeadm.go:310] 
	I0829 11:59:02.244066    4119 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token vq31dy.kcihsz2rnzecgmot \
	I0829 11:59:02.244109    4119 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a85be241893e40b79217c6f73688d370693933870156b869b3fa902a9be4179f \
	I0829 11:59:02.244119    4119 kubeadm.go:310] 	--control-plane 
	I0829 11:59:02.244122    4119 kubeadm.go:310] 
	I0829 11:59:02.244157    4119 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0829 11:59:02.244160    4119 kubeadm.go:310] 
	I0829 11:59:02.244206    4119 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token vq31dy.kcihsz2rnzecgmot \
	I0829 11:59:02.244297    4119 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a85be241893e40b79217c6f73688d370693933870156b869b3fa902a9be4179f 
	I0829 11:59:02.244352    4119 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
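	The --discovery-token-ca-cert-hash printed in the join commands above is the SHA-256 digest of the cluster CA's public key in SubjectPublicKeyInfo (DER) form. A minimal sketch that recomputes it from the PEM-encoded CA certificate (path taken from the certificateDir the log reports, /var/lib/minikube/certs):

```go
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the CA key.
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		panic(err)
	}
	fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
}
```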
	I0829 11:59:02.244360    4119 cni.go:84] Creating CNI manager for ""
	I0829 11:59:02.244367    4119 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0829 11:59:02.248884    4119 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0829 11:59:02.254856    4119 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0829 11:59:02.259309    4119 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
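	The 496-byte conflist copied above configures the standard CNI bridge plugin that the "recommending bridge" line selected. The exact file contents are not shown in the log, so the values below are illustrative only; this sketch writes a representative conflist of the same shape to the same path:

```go
package main

import "os"

// Representative bridge CNI configuration; NOT the exact file minikube
// ships — the log only shows its size (496 bytes) and destination.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    }
  ]
}`

func main() {
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		panic(err)
	}
}
```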
	I0829 11:59:02.264883    4119 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0829 11:59:02.264936    4119 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 11:59:02.264998    4119 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-373000 minikube.k8s.io/updated_at=2024_08_29T11_59_02_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=95341f0b655cea8be5ebfc6bf112c8367dc08d33 minikube.k8s.io/name=running-upgrade-373000 minikube.k8s.io/primary=true
	I0829 11:59:02.274659    4119 ops.go:34] apiserver oom_adj: -16
	I0829 11:59:02.310743    4119 kubeadm.go:1113] duration metric: took 45.855417ms to wait for elevateKubeSystemPrivileges
	I0829 11:59:02.310763    4119 kubeadm.go:394] duration metric: took 4m15.156970791s to StartCluster
	I0829 11:59:02.310773    4119 settings.go:142] acquiring lock: {Name:mk4c43097bad4576952ccc223d0a8a031914c5a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 11:59:02.310846    4119 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19531-965/kubeconfig
	I0829 11:59:02.311224    4119 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19531-965/kubeconfig: {Name:mk8af293b3e18a99fbcb2b7e12f57a5251bf5686 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 11:59:02.311425    4119 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0829 11:59:02.311436    4119 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0829 11:59:02.311476    4119 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-373000"
	I0829 11:59:02.311492    4119 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-373000"
	I0829 11:59:02.311492    4119 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-373000"
	W0829 11:59:02.311496    4119 addons.go:243] addon storage-provisioner should already be in state true
	I0829 11:59:02.311505    4119 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-373000"
	I0829 11:59:02.311508    4119 host.go:66] Checking if "running-upgrade-373000" exists ...
	I0829 11:59:02.311509    4119 config.go:182] Loaded profile config "running-upgrade-373000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0829 11:59:02.312367    4119 kapi.go:59] client config for running-upgrade-373000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19531-965/.minikube/profiles/running-upgrade-373000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19531-965/.minikube/profiles/running-upgrade-373000/client.key", CAFile:"/Users/jenkins/minikube-integration/19531-965/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1025f3f80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
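	The &rest.Config dump above is the client configuration minikube assembles from the profile's client certificate, key, and CA. An equivalent client can be built with client-go's clientcmd helpers — a sketch, not minikube's own code path, using the kubeconfig path from the "Updating kubeconfig" line (requires the k8s.io/client-go module):

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path taken from the log's "Updating kubeconfig" line.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/Users/jenkins/minikube-integration/19531-965/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err) // against this cluster it would time out, like the healthz probes
	}
	fmt.Println("nodes:", len(nodes.Items))
}
```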
	I0829 11:59:02.312497    4119 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-373000"
	W0829 11:59:02.312504    4119 addons.go:243] addon default-storageclass should already be in state true
	I0829 11:59:02.312511    4119 host.go:66] Checking if "running-upgrade-373000" exists ...
	I0829 11:59:02.315865    4119 out.go:177] * Verifying Kubernetes components...
	I0829 11:59:02.316163    4119 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0829 11:59:02.320166    4119 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0829 11:59:02.320173    4119 sshutil.go:53] new ssh client: &{IP:localhost Port:50289 SSHKeyPath:/Users/jenkins/minikube-integration/19531-965/.minikube/machines/running-upgrade-373000/id_rsa Username:docker}
	I0829 11:59:02.322735    4119 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 11:59:02.326812    4119 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 11:59:02.330863    4119 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0829 11:59:02.330869    4119 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0829 11:59:02.330876    4119 sshutil.go:53] new ssh client: &{IP:localhost Port:50289 SSHKeyPath:/Users/jenkins/minikube-integration/19531-965/.minikube/machines/running-upgrade-373000/id_rsa Username:docker}
	I0829 11:59:02.409054    4119 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 11:59:02.414254    4119 api_server.go:52] waiting for apiserver process to appear ...
	I0829 11:59:02.414298    4119 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 11:59:02.418137    4119 api_server.go:72] duration metric: took 106.704666ms to wait for apiserver process to appear ...
	I0829 11:59:02.418146    4119 api_server.go:88] waiting for apiserver healthz status ...
	I0829 11:59:02.418153    4119 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0829 11:59:02.452605    4119 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0829 11:59:02.473504    4119 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0829 11:59:02.797938    4119 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0829 11:59:02.797950    4119 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0829 11:59:07.420145    4119 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0829 11:59:07.420178    4119 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0829 11:59:12.420412    4119 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0829 11:59:12.420446    4119 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0829 11:59:17.420676    4119 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0829 11:59:17.420701    4119 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0829 11:59:22.421061    4119 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0829 11:59:22.421108    4119 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0829 11:59:27.421591    4119 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0829 11:59:27.421797    4119 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0829 11:59:32.422630    4119 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0829 11:59:32.422679    4119 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0829 11:59:32.799842    4119 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0829 11:59:32.803835    4119 out.go:177] * Enabled addons: storage-provisioner
	I0829 11:59:32.810856    4119 addons.go:510] duration metric: took 30.499863625s for enable addons: enabled=[storage-provisioner]
	I0829 11:59:37.423703    4119 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0829 11:59:37.423761    4119 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0829 11:59:42.425153    4119 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0829 11:59:42.425208    4119 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0829 11:59:47.426216    4119 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0829 11:59:47.426237    4119 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0829 11:59:52.428321    4119 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0829 11:59:52.428343    4119 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0829 11:59:57.430442    4119 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0829 11:59:57.430480    4119 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0829 12:00:02.432638    4119 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0829 12:00:02.432744    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0829 12:00:02.443713    4119 logs.go:276] 1 containers: [8824431bf5eb]
	I0829 12:00:02.443788    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0829 12:00:02.454292    4119 logs.go:276] 1 containers: [d15c22a60866]
	I0829 12:00:02.454360    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0829 12:00:02.465040    4119 logs.go:276] 2 containers: [fe5c1d057679 1be58859c7a2]
	I0829 12:00:02.465111    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0829 12:00:02.475254    4119 logs.go:276] 1 containers: [faec0fe3d9f6]
	I0829 12:00:02.475322    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0829 12:00:02.485357    4119 logs.go:276] 1 containers: [9d9fa1bb1973]
	I0829 12:00:02.485426    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0829 12:00:02.496014    4119 logs.go:276] 1 containers: [64e680a31f40]
	I0829 12:00:02.496085    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0829 12:00:02.506155    4119 logs.go:276] 0 containers: []
	W0829 12:00:02.506167    4119 logs.go:278] No container was found matching "kindnet"
	I0829 12:00:02.506223    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0829 12:00:02.516759    4119 logs.go:276] 1 containers: [d76c6c38a8c3]
	I0829 12:00:02.516778    4119 logs.go:123] Gathering logs for kube-apiserver [8824431bf5eb] ...
	I0829 12:00:02.516785    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8824431bf5eb"
	I0829 12:00:02.532016    4119 logs.go:123] Gathering logs for coredns [fe5c1d057679] ...
	I0829 12:00:02.532028    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe5c1d057679"
	I0829 12:00:02.543713    4119 logs.go:123] Gathering logs for coredns [1be58859c7a2] ...
	I0829 12:00:02.543724    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1be58859c7a2"
	I0829 12:00:02.557476    4119 logs.go:123] Gathering logs for kube-scheduler [faec0fe3d9f6] ...
	I0829 12:00:02.557486    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faec0fe3d9f6"
	I0829 12:00:02.571960    4119 logs.go:123] Gathering logs for kube-controller-manager [64e680a31f40] ...
	I0829 12:00:02.571972    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64e680a31f40"
	I0829 12:00:02.589075    4119 logs.go:123] Gathering logs for Docker ...
	I0829 12:00:02.589086    4119 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0829 12:00:02.613374    4119 logs.go:123] Gathering logs for kubelet ...
	I0829 12:00:02.613383    4119 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0829 12:00:02.631617    4119 logs.go:138] Found kubelet problem: Aug 29 18:55:05 running-upgrade-373000 kubelet[3970]: W0829 18:55:05.970143    3970 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-373000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-373000' and this object
	W0829 12:00:02.631712    4119 logs.go:138] Found kubelet problem: Aug 29 18:55:05 running-upgrade-373000 kubelet[3970]: E0829 18:55:05.970168    3970 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-373000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-373000' and this object
	I0829 12:00:02.648169    4119 logs.go:123] Gathering logs for describe nodes ...
	I0829 12:00:02.648177    4119 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0829 12:00:02.688153    4119 logs.go:123] Gathering logs for etcd [d15c22a60866] ...
	I0829 12:00:02.688163    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d15c22a60866"
	I0829 12:00:02.702721    4119 logs.go:123] Gathering logs for kube-proxy [9d9fa1bb1973] ...
	I0829 12:00:02.702733    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d9fa1bb1973"
	I0829 12:00:02.715156    4119 logs.go:123] Gathering logs for storage-provisioner [d76c6c38a8c3] ...
	I0829 12:00:02.715167    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d76c6c38a8c3"
	I0829 12:00:02.727542    4119 logs.go:123] Gathering logs for container status ...
	I0829 12:00:02.727554    4119 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 12:00:02.739144    4119 logs.go:123] Gathering logs for dmesg ...
	I0829 12:00:02.739155    4119 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 12:00:02.743706    4119 out.go:358] Setting ErrFile to fd 2...
	I0829 12:00:02.743713    4119 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0829 12:00:02.743736    4119 out.go:270] X Problems detected in kubelet:
	W0829 12:00:02.743741    4119 out.go:270]   Aug 29 18:55:05 running-upgrade-373000 kubelet[3970]: W0829 18:55:05.970143    3970 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-373000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-373000' and this object
	W0829 12:00:02.743744    4119 out.go:270]   Aug 29 18:55:05 running-upgrade-373000 kubelet[3970]: E0829 18:55:05.970168    3970 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-373000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-373000' and this object
	I0829 12:00:02.743748    4119 out.go:358] Setting ErrFile to fd 2...
	I0829 12:00:02.743751    4119 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 12:00:12.747738    4119 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0829 12:00:17.750031    4119 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0829 12:00:17.750248    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0829 12:00:17.775836    4119 logs.go:276] 1 containers: [8824431bf5eb]
	I0829 12:00:17.775924    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0829 12:00:17.791594    4119 logs.go:276] 1 containers: [d15c22a60866]
	I0829 12:00:17.791671    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0829 12:00:17.809444    4119 logs.go:276] 2 containers: [fe5c1d057679 1be58859c7a2]
	I0829 12:00:17.809517    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0829 12:00:17.822251    4119 logs.go:276] 1 containers: [faec0fe3d9f6]
	I0829 12:00:17.822320    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0829 12:00:17.832544    4119 logs.go:276] 1 containers: [9d9fa1bb1973]
	I0829 12:00:17.832610    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0829 12:00:17.843136    4119 logs.go:276] 1 containers: [64e680a31f40]
	I0829 12:00:17.843194    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0829 12:00:17.853604    4119 logs.go:276] 0 containers: []
	W0829 12:00:17.853615    4119 logs.go:278] No container was found matching "kindnet"
	I0829 12:00:17.853674    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0829 12:00:17.864034    4119 logs.go:276] 1 containers: [d76c6c38a8c3]
	I0829 12:00:17.864048    4119 logs.go:123] Gathering logs for kube-controller-manager [64e680a31f40] ...
	I0829 12:00:17.864054    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64e680a31f40"
	I0829 12:00:17.881211    4119 logs.go:123] Gathering logs for coredns [fe5c1d057679] ...
	I0829 12:00:17.881224    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe5c1d057679"
	I0829 12:00:17.893124    4119 logs.go:123] Gathering logs for coredns [1be58859c7a2] ...
	I0829 12:00:17.893134    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1be58859c7a2"
	I0829 12:00:17.904916    4119 logs.go:123] Gathering logs for describe nodes ...
	I0829 12:00:17.904930    4119 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0829 12:00:17.942371    4119 logs.go:123] Gathering logs for kube-apiserver [8824431bf5eb] ...
	I0829 12:00:17.942385    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8824431bf5eb"
	I0829 12:00:17.957955    4119 logs.go:123] Gathering logs for etcd [d15c22a60866] ...
	I0829 12:00:17.957968    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d15c22a60866"
	I0829 12:00:17.976217    4119 logs.go:123] Gathering logs for kube-scheduler [faec0fe3d9f6] ...
	I0829 12:00:17.976227    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faec0fe3d9f6"
	I0829 12:00:17.991963    4119 logs.go:123] Gathering logs for kube-proxy [9d9fa1bb1973] ...
	I0829 12:00:17.991974    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d9fa1bb1973"
	I0829 12:00:18.005147    4119 logs.go:123] Gathering logs for storage-provisioner [d76c6c38a8c3] ...
	I0829 12:00:18.005161    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d76c6c38a8c3"
	I0829 12:00:18.019798    4119 logs.go:123] Gathering logs for kubelet ...
	I0829 12:00:18.019811    4119 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0829 12:00:18.037092    4119 logs.go:138] Found kubelet problem: Aug 29 18:55:05 running-upgrade-373000 kubelet[3970]: W0829 18:55:05.970143    3970 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-373000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-373000' and this object
	W0829 12:00:18.037185    4119 logs.go:138] Found kubelet problem: Aug 29 18:55:05 running-upgrade-373000 kubelet[3970]: E0829 18:55:05.970168    3970 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-373000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-373000' and this object
	I0829 12:00:18.053852    4119 logs.go:123] Gathering logs for dmesg ...
	I0829 12:00:18.053857    4119 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 12:00:18.058676    4119 logs.go:123] Gathering logs for Docker ...
	I0829 12:00:18.058683    4119 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0829 12:00:18.083514    4119 logs.go:123] Gathering logs for container status ...
	I0829 12:00:18.083526    4119 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 12:00:18.095336    4119 out.go:358] Setting ErrFile to fd 2...
	I0829 12:00:18.095347    4119 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0829 12:00:18.095372    4119 out.go:270] X Problems detected in kubelet:
	W0829 12:00:18.095377    4119 out.go:270]   Aug 29 18:55:05 running-upgrade-373000 kubelet[3970]: W0829 18:55:05.970143    3970 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-373000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-373000' and this object
	W0829 12:00:18.095380    4119 out.go:270]   Aug 29 18:55:05 running-upgrade-373000 kubelet[3970]: E0829 18:55:05.970168    3970 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-373000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-373000' and this object
	I0829 12:00:18.095384    4119 out.go:358] Setting ErrFile to fd 2...
	I0829 12:00:18.095387    4119 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 12:00:28.099355    4119 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0829 12:00:33.101549    4119 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0829 12:00:33.101810    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0829 12:00:33.125525    4119 logs.go:276] 1 containers: [8824431bf5eb]
	I0829 12:00:33.125622    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0829 12:00:33.141344    4119 logs.go:276] 1 containers: [d15c22a60866]
	I0829 12:00:33.141429    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0829 12:00:33.153580    4119 logs.go:276] 2 containers: [fe5c1d057679 1be58859c7a2]
	I0829 12:00:33.153648    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0829 12:00:33.164860    4119 logs.go:276] 1 containers: [faec0fe3d9f6]
	I0829 12:00:33.164919    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0829 12:00:33.175394    4119 logs.go:276] 1 containers: [9d9fa1bb1973]
	I0829 12:00:33.175455    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0829 12:00:33.186431    4119 logs.go:276] 1 containers: [64e680a31f40]
	I0829 12:00:33.186511    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0829 12:00:33.197098    4119 logs.go:276] 0 containers: []
	W0829 12:00:33.197111    4119 logs.go:278] No container was found matching "kindnet"
	I0829 12:00:33.197185    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0829 12:00:33.209028    4119 logs.go:276] 1 containers: [d76c6c38a8c3]
	I0829 12:00:33.209043    4119 logs.go:123] Gathering logs for kubelet ...
	I0829 12:00:33.209049    4119 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0829 12:00:33.226670    4119 logs.go:138] Found kubelet problem: Aug 29 18:55:05 running-upgrade-373000 kubelet[3970]: W0829 18:55:05.970143    3970 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-373000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-373000' and this object
	W0829 12:00:33.226766    4119 logs.go:138] Found kubelet problem: Aug 29 18:55:05 running-upgrade-373000 kubelet[3970]: E0829 18:55:05.970168    3970 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-373000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-373000' and this object
	I0829 12:00:33.243528    4119 logs.go:123] Gathering logs for dmesg ...
	I0829 12:00:33.243539    4119 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 12:00:33.248406    4119 logs.go:123] Gathering logs for describe nodes ...
	I0829 12:00:33.248418    4119 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0829 12:00:33.286670    4119 logs.go:123] Gathering logs for kube-apiserver [8824431bf5eb] ...
	I0829 12:00:33.286681    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8824431bf5eb"
	I0829 12:00:33.306339    4119 logs.go:123] Gathering logs for etcd [d15c22a60866] ...
	I0829 12:00:33.306349    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d15c22a60866"
	I0829 12:00:33.320803    4119 logs.go:123] Gathering logs for coredns [1be58859c7a2] ...
	I0829 12:00:33.320818    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1be58859c7a2"
	I0829 12:00:33.333139    4119 logs.go:123] Gathering logs for kube-controller-manager [64e680a31f40] ...
	I0829 12:00:33.333152    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64e680a31f40"
	I0829 12:00:33.350978    4119 logs.go:123] Gathering logs for Docker ...
	I0829 12:00:33.350989    4119 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0829 12:00:33.375880    4119 logs.go:123] Gathering logs for coredns [fe5c1d057679] ...
	I0829 12:00:33.375893    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe5c1d057679"
	I0829 12:00:33.387797    4119 logs.go:123] Gathering logs for kube-scheduler [faec0fe3d9f6] ...
	I0829 12:00:33.387808    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faec0fe3d9f6"
	I0829 12:00:33.402396    4119 logs.go:123] Gathering logs for kube-proxy [9d9fa1bb1973] ...
	I0829 12:00:33.402408    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d9fa1bb1973"
	I0829 12:00:33.414005    4119 logs.go:123] Gathering logs for storage-provisioner [d76c6c38a8c3] ...
	I0829 12:00:33.414017    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d76c6c38a8c3"
	I0829 12:00:33.426900    4119 logs.go:123] Gathering logs for container status ...
	I0829 12:00:33.426912    4119 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 12:00:33.439521    4119 out.go:358] Setting ErrFile to fd 2...
	I0829 12:00:33.439531    4119 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0829 12:00:33.439556    4119 out.go:270] X Problems detected in kubelet:
	W0829 12:00:33.439561    4119 out.go:270]   Aug 29 18:55:05 running-upgrade-373000 kubelet[3970]: W0829 18:55:05.970143    3970 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-373000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-373000' and this object
	W0829 12:00:33.439564    4119 out.go:270]   Aug 29 18:55:05 running-upgrade-373000 kubelet[3970]: E0829 18:55:05.970168    3970 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-373000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-373000' and this object
	I0829 12:00:33.439579    4119 out.go:358] Setting ErrFile to fd 2...
	I0829 12:00:33.439583    4119 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 12:00:43.443567    4119 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0829 12:00:48.445994    4119 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0829 12:00:48.446198    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0829 12:00:48.462516    4119 logs.go:276] 1 containers: [8824431bf5eb]
	I0829 12:00:48.462604    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0829 12:00:48.475504    4119 logs.go:276] 1 containers: [d15c22a60866]
	I0829 12:00:48.475572    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0829 12:00:48.487757    4119 logs.go:276] 2 containers: [fe5c1d057679 1be58859c7a2]
	I0829 12:00:48.487825    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0829 12:00:48.498790    4119 logs.go:276] 1 containers: [faec0fe3d9f6]
	I0829 12:00:48.498863    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0829 12:00:48.509350    4119 logs.go:276] 1 containers: [9d9fa1bb1973]
	I0829 12:00:48.509425    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0829 12:00:48.520926    4119 logs.go:276] 1 containers: [64e680a31f40]
	I0829 12:00:48.520997    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0829 12:00:48.531074    4119 logs.go:276] 0 containers: []
	W0829 12:00:48.531085    4119 logs.go:278] No container was found matching "kindnet"
	I0829 12:00:48.531140    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0829 12:00:48.545322    4119 logs.go:276] 1 containers: [d76c6c38a8c3]
	I0829 12:00:48.545337    4119 logs.go:123] Gathering logs for kubelet ...
	I0829 12:00:48.545343    4119 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0829 12:00:48.564484    4119 logs.go:138] Found kubelet problem: Aug 29 18:55:05 running-upgrade-373000 kubelet[3970]: W0829 18:55:05.970143    3970 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-373000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-373000' and this object
	W0829 12:00:48.564580    4119 logs.go:138] Found kubelet problem: Aug 29 18:55:05 running-upgrade-373000 kubelet[3970]: E0829 18:55:05.970168    3970 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-373000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-373000' and this object
	I0829 12:00:48.580720    4119 logs.go:123] Gathering logs for kube-proxy [9d9fa1bb1973] ...
	I0829 12:00:48.580728    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d9fa1bb1973"
	I0829 12:00:48.597581    4119 logs.go:123] Gathering logs for storage-provisioner [d76c6c38a8c3] ...
	I0829 12:00:48.597594    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d76c6c38a8c3"
	I0829 12:00:48.609054    4119 logs.go:123] Gathering logs for Docker ...
	I0829 12:00:48.609065    4119 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0829 12:00:48.634027    4119 logs.go:123] Gathering logs for dmesg ...
	I0829 12:00:48.634038    4119 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 12:00:48.639195    4119 logs.go:123] Gathering logs for describe nodes ...
	I0829 12:00:48.639204    4119 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0829 12:00:48.674342    4119 logs.go:123] Gathering logs for kube-apiserver [8824431bf5eb] ...
	I0829 12:00:48.674357    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8824431bf5eb"
	I0829 12:00:48.689466    4119 logs.go:123] Gathering logs for etcd [d15c22a60866] ...
	I0829 12:00:48.689478    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d15c22a60866"
	I0829 12:00:48.703137    4119 logs.go:123] Gathering logs for coredns [fe5c1d057679] ...
	I0829 12:00:48.703147    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe5c1d057679"
	I0829 12:00:48.714627    4119 logs.go:123] Gathering logs for coredns [1be58859c7a2] ...
	I0829 12:00:48.714638    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1be58859c7a2"
	I0829 12:00:48.725652    4119 logs.go:123] Gathering logs for kube-scheduler [faec0fe3d9f6] ...
	I0829 12:00:48.725663    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faec0fe3d9f6"
	I0829 12:00:48.749073    4119 logs.go:123] Gathering logs for kube-controller-manager [64e680a31f40] ...
	I0829 12:00:48.749082    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64e680a31f40"
	I0829 12:00:48.769695    4119 logs.go:123] Gathering logs for container status ...
	I0829 12:00:48.769705    4119 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 12:00:48.781587    4119 out.go:358] Setting ErrFile to fd 2...
	I0829 12:00:48.781598    4119 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0829 12:00:48.781627    4119 out.go:270] X Problems detected in kubelet:
	W0829 12:00:48.781634    4119 out.go:270]   Aug 29 18:55:05 running-upgrade-373000 kubelet[3970]: W0829 18:55:05.970143    3970 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-373000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-373000' and this object
	W0829 12:00:48.781639    4119 out.go:270]   Aug 29 18:55:05 running-upgrade-373000 kubelet[3970]: E0829 18:55:05.970168    3970 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-373000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-373000' and this object
	I0829 12:00:48.781644    4119 out.go:358] Setting ErrFile to fd 2...
	I0829 12:00:48.781646    4119 out.go:392] TERM=,COLORTERM=, which probably does not support color
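
The block above is one full iteration of the wait-for-apiserver loop: a GET against /healthz that gives up after about five seconds (the Client.Timeout error), then a fresh round of log gathering, then the next attempt roughly fifteen seconds after the last. A minimal Go sketch of that polling shape follows; the endpoint, interval, and timeout are read off the timestamps above, and the function is illustrative rather than minikube's actual api_server.go code.

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    // pollHealthz mirrors the loop in the log: probe /healthz with a short
    // client timeout and retry on a fixed interval, gathering logs between
    // attempts. Interval and timeout are inferred from the log timestamps
    // (attempts ~15s apart, each failing after ~5s); they are assumptions.
    func pollHealthz(endpoint string, interval, timeout time.Duration, attempts int) error {
    	client := &http.Client{
    		Timeout: timeout,
    		// The apiserver's cert is self-signed at this point, so a probe
    		// like this would have to skip verification or pin minikube's CA.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	for i := 0; i < attempts; i++ {
    		resp, err := client.Get(endpoint)
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil // apiserver healthy
    			}
    			err = fmt.Errorf("status %d", resp.StatusCode)
    		}
    		fmt.Printf("stopped: %s: %v\n", endpoint, err)
    		time.Sleep(interval) // log gathering happens in this window
    	}
    	return fmt.Errorf("apiserver never became healthy at %s", endpoint)
    }

    func main() {
    	_ = pollHealthz("https://10.0.2.15:8443/healthz", 15*time.Second, 5*time.Second, 8)
    }
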
	I0829 12:00:58.785641    4119 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0829 12:01:03.787822    4119 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0829 12:01:03.787918    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0829 12:01:03.798363    4119 logs.go:276] 1 containers: [8824431bf5eb]
	I0829 12:01:03.798427    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0829 12:01:03.809330    4119 logs.go:276] 1 containers: [d15c22a60866]
	I0829 12:01:03.809393    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0829 12:01:03.820092    4119 logs.go:276] 2 containers: [fe5c1d057679 1be58859c7a2]
	I0829 12:01:03.820162    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0829 12:01:03.830872    4119 logs.go:276] 1 containers: [faec0fe3d9f6]
	I0829 12:01:03.830937    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0829 12:01:03.841322    4119 logs.go:276] 1 containers: [9d9fa1bb1973]
	I0829 12:01:03.841382    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0829 12:01:03.851998    4119 logs.go:276] 1 containers: [64e680a31f40]
	I0829 12:01:03.852069    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0829 12:01:03.862338    4119 logs.go:276] 0 containers: []
	W0829 12:01:03.862351    4119 logs.go:278] No container was found matching "kindnet"
	I0829 12:01:03.862409    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0829 12:01:03.872795    4119 logs.go:276] 1 containers: [d76c6c38a8c3]
	I0829 12:01:03.872810    4119 logs.go:123] Gathering logs for describe nodes ...
	I0829 12:01:03.872816    4119 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0829 12:01:03.908304    4119 logs.go:123] Gathering logs for etcd [d15c22a60866] ...
	I0829 12:01:03.908317    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d15c22a60866"
	I0829 12:01:03.923036    4119 logs.go:123] Gathering logs for coredns [fe5c1d057679] ...
	I0829 12:01:03.923047    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe5c1d057679"
	I0829 12:01:03.934622    4119 logs.go:123] Gathering logs for coredns [1be58859c7a2] ...
	I0829 12:01:03.934633    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1be58859c7a2"
	I0829 12:01:03.946405    4119 logs.go:123] Gathering logs for storage-provisioner [d76c6c38a8c3] ...
	I0829 12:01:03.946417    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d76c6c38a8c3"
	I0829 12:01:03.958474    4119 logs.go:123] Gathering logs for container status ...
	I0829 12:01:03.958487    4119 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 12:01:03.970191    4119 logs.go:123] Gathering logs for kubelet ...
	I0829 12:01:03.970202    4119 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0829 12:01:03.988603    4119 logs.go:138] Found kubelet problem: Aug 29 18:55:05 running-upgrade-373000 kubelet[3970]: W0829 18:55:05.970143    3970 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-373000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-373000' and this object
	W0829 12:01:03.988699    4119 logs.go:138] Found kubelet problem: Aug 29 18:55:05 running-upgrade-373000 kubelet[3970]: E0829 18:55:05.970168    3970 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-373000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-373000' and this object
	I0829 12:01:04.005180    4119 logs.go:123] Gathering logs for dmesg ...
	I0829 12:01:04.005186    4119 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 12:01:04.009967    4119 logs.go:123] Gathering logs for kube-apiserver [8824431bf5eb] ...
	I0829 12:01:04.009975    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8824431bf5eb"
	I0829 12:01:04.023818    4119 logs.go:123] Gathering logs for kube-scheduler [faec0fe3d9f6] ...
	I0829 12:01:04.023831    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faec0fe3d9f6"
	I0829 12:01:04.038604    4119 logs.go:123] Gathering logs for kube-proxy [9d9fa1bb1973] ...
	I0829 12:01:04.038614    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d9fa1bb1973"
	I0829 12:01:04.050550    4119 logs.go:123] Gathering logs for kube-controller-manager [64e680a31f40] ...
	I0829 12:01:04.050561    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64e680a31f40"
	I0829 12:01:04.068531    4119 logs.go:123] Gathering logs for Docker ...
	I0829 12:01:04.068545    4119 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0829 12:01:04.091816    4119 out.go:358] Setting ErrFile to fd 2...
	I0829 12:01:04.091826    4119 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0829 12:01:04.091853    4119 out.go:270] X Problems detected in kubelet:
	W0829 12:01:04.091857    4119 out.go:270]   Aug 29 18:55:05 running-upgrade-373000 kubelet[3970]: W0829 18:55:05.970143    3970 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-373000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-373000' and this object
	W0829 12:01:04.091860    4119 out.go:270]   Aug 29 18:55:05 running-upgrade-373000 kubelet[3970]: E0829 18:55:05.970168    3970 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-373000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-373000' and this object
	I0829 12:01:04.091864    4119 out.go:358] Setting ErrFile to fd 2...
	I0829 12:01:04.091867    4119 out.go:392] TERM=,COLORTERM=, which probably does not support color
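
Each iteration begins by resolving one container ID per control-plane component with `docker ps -a --filter=name=k8s_<component>`, tolerating components that are absent (kindnet here, since this cluster runs no CNI addon). A sketch of that enumeration step follows; minikube issues these commands over SSH inside the guest, so the plain exec call standing in for ssh_runner is an assumption.

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // listComponentContainers reproduces the docker ps calls above: one name
    // filter per component, collecting whatever IDs come back. An empty
    // result (as for "kindnet") is reported, not treated as an error.
    func listComponentContainers() map[string][]string {
    	components := []string{
    		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
    	}
    	found := make(map[string][]string)
    	for _, c := range components {
    		out, err := exec.Command("docker", "ps", "-a",
    			"--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
    		if err != nil {
    			continue
    		}
    		ids := strings.Fields(string(out))
    		found[c] = ids
    		fmt.Printf("%d containers: %v\n", len(ids), ids)
    	}
    	return found
    }

    func main() {
    	listComponentContainers()
    }
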
	I0829 12:01:14.095888    4119 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0829 12:01:19.096963    4119 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0829 12:01:19.097205    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0829 12:01:19.119759    4119 logs.go:276] 1 containers: [8824431bf5eb]
	I0829 12:01:19.119866    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0829 12:01:19.139786    4119 logs.go:276] 1 containers: [d15c22a60866]
	I0829 12:01:19.139866    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0829 12:01:19.156285    4119 logs.go:276] 4 containers: [f57ab0cf1c31 0966778e7297 fe5c1d057679 1be58859c7a2]
	I0829 12:01:19.156358    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0829 12:01:19.167927    4119 logs.go:276] 1 containers: [faec0fe3d9f6]
	I0829 12:01:19.168009    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0829 12:01:19.178572    4119 logs.go:276] 1 containers: [9d9fa1bb1973]
	I0829 12:01:19.178643    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0829 12:01:19.189559    4119 logs.go:276] 1 containers: [64e680a31f40]
	I0829 12:01:19.189624    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0829 12:01:19.200799    4119 logs.go:276] 0 containers: []
	W0829 12:01:19.200811    4119 logs.go:278] No container was found matching "kindnet"
	I0829 12:01:19.200871    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0829 12:01:19.212078    4119 logs.go:276] 1 containers: [d76c6c38a8c3]
	I0829 12:01:19.212096    4119 logs.go:123] Gathering logs for Docker ...
	I0829 12:01:19.212101    4119 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0829 12:01:19.236764    4119 logs.go:123] Gathering logs for describe nodes ...
	I0829 12:01:19.236775    4119 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0829 12:01:19.272351    4119 logs.go:123] Gathering logs for kube-proxy [9d9fa1bb1973] ...
	I0829 12:01:19.272364    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d9fa1bb1973"
	I0829 12:01:19.284372    4119 logs.go:123] Gathering logs for kube-scheduler [faec0fe3d9f6] ...
	I0829 12:01:19.284388    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faec0fe3d9f6"
	I0829 12:01:19.299013    4119 logs.go:123] Gathering logs for storage-provisioner [d76c6c38a8c3] ...
	I0829 12:01:19.299027    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d76c6c38a8c3"
	I0829 12:01:19.310881    4119 logs.go:123] Gathering logs for coredns [f57ab0cf1c31] ...
	I0829 12:01:19.310895    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f57ab0cf1c31"
	I0829 12:01:19.322911    4119 logs.go:123] Gathering logs for coredns [1be58859c7a2] ...
	I0829 12:01:19.322922    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1be58859c7a2"
	I0829 12:01:19.335019    4119 logs.go:123] Gathering logs for kube-apiserver [8824431bf5eb] ...
	I0829 12:01:19.335033    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8824431bf5eb"
	I0829 12:01:19.350213    4119 logs.go:123] Gathering logs for kube-controller-manager [64e680a31f40] ...
	I0829 12:01:19.350222    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64e680a31f40"
	I0829 12:01:19.368095    4119 logs.go:123] Gathering logs for container status ...
	I0829 12:01:19.368104    4119 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 12:01:19.379763    4119 logs.go:123] Gathering logs for kubelet ...
	I0829 12:01:19.379775    4119 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0829 12:01:19.397758    4119 logs.go:138] Found kubelet problem: Aug 29 18:55:05 running-upgrade-373000 kubelet[3970]: W0829 18:55:05.970143    3970 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-373000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-373000' and this object
	W0829 12:01:19.397852    4119 logs.go:138] Found kubelet problem: Aug 29 18:55:05 running-upgrade-373000 kubelet[3970]: E0829 18:55:05.970168    3970 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-373000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-373000' and this object
	I0829 12:01:19.414492    4119 logs.go:123] Gathering logs for dmesg ...
	I0829 12:01:19.414501    4119 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 12:01:19.419147    4119 logs.go:123] Gathering logs for coredns [fe5c1d057679] ...
	I0829 12:01:19.419156    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe5c1d057679"
	I0829 12:01:19.430835    4119 logs.go:123] Gathering logs for etcd [d15c22a60866] ...
	I0829 12:01:19.430846    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d15c22a60866"
	I0829 12:01:19.444805    4119 logs.go:123] Gathering logs for coredns [0966778e7297] ...
	I0829 12:01:19.444816    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0966778e7297"
	I0829 12:01:19.456516    4119 out.go:358] Setting ErrFile to fd 2...
	I0829 12:01:19.456527    4119 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0829 12:01:19.456556    4119 out.go:270] X Problems detected in kubelet:
	W0829 12:01:19.456562    4119 out.go:270]   Aug 29 18:55:05 running-upgrade-373000 kubelet[3970]: W0829 18:55:05.970143    3970 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-373000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-373000' and this object
	W0829 12:01:19.456567    4119 out.go:270]   Aug 29 18:55:05 running-upgrade-373000 kubelet[3970]: E0829 18:55:05.970168    3970 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-373000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-373000' and this object
	I0829 12:01:19.456571    4119 out.go:358] Setting ErrFile to fd 2...
	I0829 12:01:19.456574    4119 out.go:392] TERM=,COLORTERM=, which probably does not support color
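
The two "Found kubelet problem" warnings in every iteration come from scanning the tail of the kubelet journal, fetched with `journalctl -u kubelet -n 400`, for known failure signatures. A rough sketch of that scan follows; the two substrings below are taken from the matched lines in this log, while minikube's real pattern table in logs.go is larger and is not reproduced here.

    package main

    import (
    	"bufio"
    	"fmt"
    	"strings"
    )

    // findKubeletProblems flags journal lines matching known failure
    // signatures, the way logs.go reports "Found kubelet problem". Only the
    // two patterns visible in this log are listed; the full matcher is an
    // assumption.
    func findKubeletProblems(journal string) []string {
    	patterns := []string{"failed to list", "Failed to watch"}
    	var problems []string
    	sc := bufio.NewScanner(strings.NewReader(journal))
    	for sc.Scan() {
    		line := sc.Text()
    		for _, p := range patterns {
    			if strings.Contains(line, p) {
    				problems = append(problems, line)
    				fmt.Println("Found kubelet problem:", line)
    				break
    			}
    		}
    	}
    	return problems
    }

    func main() {
    	findKubeletProblems("kubelet[3970]: W0829 reflector.go:324: failed to list *v1.ConfigMap\n")
    }
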
	I0829 12:01:29.460519    4119 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0829 12:01:34.462698    4119 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0829 12:01:34.462889    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0829 12:01:34.481574    4119 logs.go:276] 1 containers: [8824431bf5eb]
	I0829 12:01:34.481661    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0829 12:01:34.495703    4119 logs.go:276] 1 containers: [d15c22a60866]
	I0829 12:01:34.495779    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0829 12:01:34.507588    4119 logs.go:276] 4 containers: [f57ab0cf1c31 0966778e7297 fe5c1d057679 1be58859c7a2]
	I0829 12:01:34.507656    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0829 12:01:34.518075    4119 logs.go:276] 1 containers: [faec0fe3d9f6]
	I0829 12:01:34.518139    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0829 12:01:34.528346    4119 logs.go:276] 1 containers: [9d9fa1bb1973]
	I0829 12:01:34.528411    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0829 12:01:34.538524    4119 logs.go:276] 1 containers: [64e680a31f40]
	I0829 12:01:34.538588    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0829 12:01:34.549256    4119 logs.go:276] 0 containers: []
	W0829 12:01:34.549270    4119 logs.go:278] No container was found matching "kindnet"
	I0829 12:01:34.549329    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0829 12:01:34.559652    4119 logs.go:276] 1 containers: [d76c6c38a8c3]
	I0829 12:01:34.559667    4119 logs.go:123] Gathering logs for etcd [d15c22a60866] ...
	I0829 12:01:34.559672    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d15c22a60866"
	I0829 12:01:34.578188    4119 logs.go:123] Gathering logs for storage-provisioner [d76c6c38a8c3] ...
	I0829 12:01:34.578200    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d76c6c38a8c3"
	I0829 12:01:34.590039    4119 logs.go:123] Gathering logs for dmesg ...
	I0829 12:01:34.590050    4119 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 12:01:34.594565    4119 logs.go:123] Gathering logs for describe nodes ...
	I0829 12:01:34.594571    4119 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0829 12:01:34.629325    4119 logs.go:123] Gathering logs for coredns [f57ab0cf1c31] ...
	I0829 12:01:34.629339    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f57ab0cf1c31"
	I0829 12:01:34.640465    4119 logs.go:123] Gathering logs for coredns [fe5c1d057679] ...
	I0829 12:01:34.640479    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe5c1d057679"
	I0829 12:01:34.652483    4119 logs.go:123] Gathering logs for kube-controller-manager [64e680a31f40] ...
	I0829 12:01:34.652495    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64e680a31f40"
	I0829 12:01:34.669522    4119 logs.go:123] Gathering logs for kube-apiserver [8824431bf5eb] ...
	I0829 12:01:34.669533    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8824431bf5eb"
	I0829 12:01:34.683925    4119 logs.go:123] Gathering logs for coredns [1be58859c7a2] ...
	I0829 12:01:34.683936    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1be58859c7a2"
	I0829 12:01:34.695602    4119 logs.go:123] Gathering logs for kube-scheduler [faec0fe3d9f6] ...
	I0829 12:01:34.695615    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faec0fe3d9f6"
	I0829 12:01:34.711085    4119 logs.go:123] Gathering logs for kube-proxy [9d9fa1bb1973] ...
	I0829 12:01:34.711098    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d9fa1bb1973"
	I0829 12:01:34.722620    4119 logs.go:123] Gathering logs for kubelet ...
	I0829 12:01:34.722632    4119 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0829 12:01:34.741002    4119 logs.go:138] Found kubelet problem: Aug 29 18:55:05 running-upgrade-373000 kubelet[3970]: W0829 18:55:05.970143    3970 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-373000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-373000' and this object
	W0829 12:01:34.741100    4119 logs.go:138] Found kubelet problem: Aug 29 18:55:05 running-upgrade-373000 kubelet[3970]: E0829 18:55:05.970168    3970 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-373000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-373000' and this object
	I0829 12:01:34.758068    4119 logs.go:123] Gathering logs for coredns [0966778e7297] ...
	I0829 12:01:34.758080    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0966778e7297"
	I0829 12:01:34.769730    4119 logs.go:123] Gathering logs for Docker ...
	I0829 12:01:34.769744    4119 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0829 12:01:34.794781    4119 logs.go:123] Gathering logs for container status ...
	I0829 12:01:34.794789    4119 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 12:01:34.807179    4119 out.go:358] Setting ErrFile to fd 2...
	I0829 12:01:34.807193    4119 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0829 12:01:34.807220    4119 out.go:270] X Problems detected in kubelet:
	W0829 12:01:34.807224    4119 out.go:270]   Aug 29 18:55:05 running-upgrade-373000 kubelet[3970]: W0829 18:55:05.970143    3970 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-373000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-373000' and this object
	W0829 12:01:34.807228    4119 out.go:270]   Aug 29 18:55:05 running-upgrade-373000 kubelet[3970]: E0829 18:55:05.970168    3970 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-373000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-373000' and this object
	I0829 12:01:34.807231    4119 out.go:358] Setting ErrFile to fd 2...
	I0829 12:01:34.807234    4119 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 12:01:44.810755    4119 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0829 12:01:49.813060    4119 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0829 12:01:49.813338    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0829 12:01:49.842938    4119 logs.go:276] 1 containers: [8824431bf5eb]
	I0829 12:01:49.843063    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0829 12:01:49.861127    4119 logs.go:276] 1 containers: [d15c22a60866]
	I0829 12:01:49.861212    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0829 12:01:49.875315    4119 logs.go:276] 4 containers: [f57ab0cf1c31 0966778e7297 fe5c1d057679 1be58859c7a2]
	I0829 12:01:49.875395    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0829 12:01:49.886954    4119 logs.go:276] 1 containers: [faec0fe3d9f6]
	I0829 12:01:49.887023    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0829 12:01:49.896853    4119 logs.go:276] 1 containers: [9d9fa1bb1973]
	I0829 12:01:49.896924    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0829 12:01:49.907424    4119 logs.go:276] 1 containers: [64e680a31f40]
	I0829 12:01:49.907490    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0829 12:01:49.917313    4119 logs.go:276] 0 containers: []
	W0829 12:01:49.917323    4119 logs.go:278] No container was found matching "kindnet"
	I0829 12:01:49.917375    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0829 12:01:49.928380    4119 logs.go:276] 1 containers: [d76c6c38a8c3]
	I0829 12:01:49.928398    4119 logs.go:123] Gathering logs for kube-proxy [9d9fa1bb1973] ...
	I0829 12:01:49.928404    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d9fa1bb1973"
	I0829 12:01:49.940150    4119 logs.go:123] Gathering logs for etcd [d15c22a60866] ...
	I0829 12:01:49.940164    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d15c22a60866"
	I0829 12:01:49.954470    4119 logs.go:123] Gathering logs for describe nodes ...
	I0829 12:01:49.954480    4119 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0829 12:01:49.989007    4119 logs.go:123] Gathering logs for coredns [f57ab0cf1c31] ...
	I0829 12:01:49.989020    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f57ab0cf1c31"
	I0829 12:01:50.002297    4119 logs.go:123] Gathering logs for coredns [0966778e7297] ...
	I0829 12:01:50.002310    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0966778e7297"
	I0829 12:01:50.014709    4119 logs.go:123] Gathering logs for coredns [fe5c1d057679] ...
	I0829 12:01:50.014722    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe5c1d057679"
	I0829 12:01:50.026908    4119 logs.go:123] Gathering logs for storage-provisioner [d76c6c38a8c3] ...
	I0829 12:01:50.026919    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d76c6c38a8c3"
	I0829 12:01:50.038309    4119 logs.go:123] Gathering logs for Docker ...
	I0829 12:01:50.038323    4119 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0829 12:01:50.064398    4119 logs.go:123] Gathering logs for kubelet ...
	I0829 12:01:50.064407    4119 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0829 12:01:50.082667    4119 logs.go:138] Found kubelet problem: Aug 29 18:55:05 running-upgrade-373000 kubelet[3970]: W0829 18:55:05.970143    3970 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-373000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-373000' and this object
	W0829 12:01:50.082764    4119 logs.go:138] Found kubelet problem: Aug 29 18:55:05 running-upgrade-373000 kubelet[3970]: E0829 18:55:05.970168    3970 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-373000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-373000' and this object
	I0829 12:01:50.098943    4119 logs.go:123] Gathering logs for kube-scheduler [faec0fe3d9f6] ...
	I0829 12:01:50.098952    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faec0fe3d9f6"
	I0829 12:01:50.113589    4119 logs.go:123] Gathering logs for kube-controller-manager [64e680a31f40] ...
	I0829 12:01:50.113600    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64e680a31f40"
	I0829 12:01:50.131748    4119 logs.go:123] Gathering logs for dmesg ...
	I0829 12:01:50.131765    4119 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 12:01:50.136056    4119 logs.go:123] Gathering logs for coredns [1be58859c7a2] ...
	I0829 12:01:50.136062    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1be58859c7a2"
	I0829 12:01:50.148185    4119 logs.go:123] Gathering logs for container status ...
	I0829 12:01:50.148195    4119 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 12:01:50.161093    4119 logs.go:123] Gathering logs for kube-apiserver [8824431bf5eb] ...
	I0829 12:01:50.161102    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8824431bf5eb"
	I0829 12:01:50.175573    4119 out.go:358] Setting ErrFile to fd 2...
	I0829 12:01:50.175583    4119 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0829 12:01:50.175608    4119 out.go:270] X Problems detected in kubelet:
	W0829 12:01:50.175612    4119 out.go:270]   Aug 29 18:55:05 running-upgrade-373000 kubelet[3970]: W0829 18:55:05.970143    3970 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-373000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-373000' and this object
	W0829 12:01:50.175615    4119 out.go:270]   Aug 29 18:55:05 running-upgrade-373000 kubelet[3970]: E0829 18:55:05.970168    3970 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-373000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-373000' and this object
	I0829 12:01:50.175619    4119 out.go:358] Setting ErrFile to fd 2...
	I0829 12:01:50.175657    4119 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 12:02:00.179667    4119 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0829 12:02:05.181960    4119 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0829 12:02:05.182173    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0829 12:02:05.197992    4119 logs.go:276] 1 containers: [8824431bf5eb]
	I0829 12:02:05.198075    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0829 12:02:05.211211    4119 logs.go:276] 1 containers: [d15c22a60866]
	I0829 12:02:05.211286    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0829 12:02:05.226451    4119 logs.go:276] 4 containers: [f57ab0cf1c31 0966778e7297 fe5c1d057679 1be58859c7a2]
	I0829 12:02:05.226531    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0829 12:02:05.244949    4119 logs.go:276] 1 containers: [faec0fe3d9f6]
	I0829 12:02:05.245015    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0829 12:02:05.255523    4119 logs.go:276] 1 containers: [9d9fa1bb1973]
	I0829 12:02:05.255591    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0829 12:02:05.266298    4119 logs.go:276] 1 containers: [64e680a31f40]
	I0829 12:02:05.266364    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0829 12:02:05.276322    4119 logs.go:276] 0 containers: []
	W0829 12:02:05.276335    4119 logs.go:278] No container was found matching "kindnet"
	I0829 12:02:05.276391    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0829 12:02:05.286872    4119 logs.go:276] 1 containers: [d76c6c38a8c3]
	I0829 12:02:05.286892    4119 logs.go:123] Gathering logs for kube-apiserver [8824431bf5eb] ...
	I0829 12:02:05.286897    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8824431bf5eb"
	I0829 12:02:05.302317    4119 logs.go:123] Gathering logs for kube-controller-manager [64e680a31f40] ...
	I0829 12:02:05.302330    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64e680a31f40"
	I0829 12:02:05.318965    4119 logs.go:123] Gathering logs for storage-provisioner [d76c6c38a8c3] ...
	I0829 12:02:05.318976    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d76c6c38a8c3"
	I0829 12:02:05.330409    4119 logs.go:123] Gathering logs for container status ...
	I0829 12:02:05.330424    4119 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 12:02:05.347053    4119 logs.go:123] Gathering logs for dmesg ...
	I0829 12:02:05.347065    4119 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 12:02:05.351527    4119 logs.go:123] Gathering logs for describe nodes ...
	I0829 12:02:05.351536    4119 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0829 12:02:05.386699    4119 logs.go:123] Gathering logs for etcd [d15c22a60866] ...
	I0829 12:02:05.386712    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d15c22a60866"
	I0829 12:02:05.400895    4119 logs.go:123] Gathering logs for kube-proxy [9d9fa1bb1973] ...
	I0829 12:02:05.400908    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d9fa1bb1973"
	I0829 12:02:05.414004    4119 logs.go:123] Gathering logs for Docker ...
	I0829 12:02:05.414015    4119 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0829 12:02:05.439177    4119 logs.go:123] Gathering logs for kubelet ...
	I0829 12:02:05.439185    4119 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0829 12:02:05.457615    4119 logs.go:138] Found kubelet problem: Aug 29 18:55:05 running-upgrade-373000 kubelet[3970]: W0829 18:55:05.970143    3970 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-373000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-373000' and this object
	W0829 12:02:05.457709    4119 logs.go:138] Found kubelet problem: Aug 29 18:55:05 running-upgrade-373000 kubelet[3970]: E0829 18:55:05.970168    3970 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-373000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-373000' and this object
	I0829 12:02:05.473829    4119 logs.go:123] Gathering logs for coredns [f57ab0cf1c31] ...
	I0829 12:02:05.473834    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f57ab0cf1c31"
	I0829 12:02:05.485357    4119 logs.go:123] Gathering logs for coredns [fe5c1d057679] ...
	I0829 12:02:05.485368    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe5c1d057679"
	I0829 12:02:05.498127    4119 logs.go:123] Gathering logs for kube-scheduler [faec0fe3d9f6] ...
	I0829 12:02:05.498139    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faec0fe3d9f6"
	I0829 12:02:05.513293    4119 logs.go:123] Gathering logs for coredns [0966778e7297] ...
	I0829 12:02:05.513304    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0966778e7297"
	I0829 12:02:05.525160    4119 logs.go:123] Gathering logs for coredns [1be58859c7a2] ...
	I0829 12:02:05.525172    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1be58859c7a2"
	I0829 12:02:05.537546    4119 out.go:358] Setting ErrFile to fd 2...
	I0829 12:02:05.537556    4119 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0829 12:02:05.537584    4119 out.go:270] X Problems detected in kubelet:
	W0829 12:02:05.537588    4119 out.go:270]   Aug 29 18:55:05 running-upgrade-373000 kubelet[3970]: W0829 18:55:05.970143    3970 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-373000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-373000' and this object
	W0829 12:02:05.537591    4119 out.go:270]   Aug 29 18:55:05 running-upgrade-373000 kubelet[3970]: E0829 18:55:05.970168    3970 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-373000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-373000' and this object
	I0829 12:02:05.537594    4119 out.go:358] Setting ErrFile to fd 2...
	I0829 12:02:05.537597    4119 out.go:392] TERM=,COLORTERM=, which probably does not support color
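
The underlying failure every scan keeps reporting is a node-authorizer denial: a kubelet's node identity (here system:node:running-upgrade-373000) may only read a ConfigMap once the authorizer's graph links that ConfigMap to a pod bound to the node, and "no relationship found between node ... and this object" means that link is missing, consistent with the apiserver never becoming healthy after the binary upgrade. A hedged client-go sketch of how one could reproduce the denial as an access review follows; the kubelet kubeconfig path is the kubeadm default and an assumption here, not something taken from this log.

    package main

    import (
    	"context"
    	"fmt"

    	authv1 "k8s.io/api/authorization/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Assumed path: kubeadm's kubelet credentials inside the guest. Run
    	// with this identity, the review should come back denied with the
    	// same node-authorizer reason seen in the log.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	review := &authv1.SelfSubjectAccessReview{
    		Spec: authv1.SelfSubjectAccessReviewSpec{
    			ResourceAttributes: &authv1.ResourceAttributes{
    				Namespace: "kube-system",
    				Verb:      "list",
    				Resource:  "configmaps",
    				Name:      "kube-proxy",
    			},
    		},
    	}
    	res, err := cs.AuthorizationV1().SelfSubjectAccessReviews().Create(
    		context.Background(), review, metav1.CreateOptions{})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("allowed=%v reason=%q\n", res.Status.Allowed, res.Status.Reason)
    }
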
	I0829 12:02:15.541152    4119 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0829 12:02:20.542781    4119 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0829 12:02:20.542899    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0829 12:02:20.555874    4119 logs.go:276] 1 containers: [8824431bf5eb]
	I0829 12:02:20.555954    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0829 12:02:20.566585    4119 logs.go:276] 1 containers: [d15c22a60866]
	I0829 12:02:20.566658    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0829 12:02:20.577476    4119 logs.go:276] 4 containers: [f57ab0cf1c31 0966778e7297 fe5c1d057679 1be58859c7a2]
	I0829 12:02:20.577546    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0829 12:02:20.587688    4119 logs.go:276] 1 containers: [faec0fe3d9f6]
	I0829 12:02:20.587758    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0829 12:02:20.600324    4119 logs.go:276] 1 containers: [9d9fa1bb1973]
	I0829 12:02:20.600393    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0829 12:02:20.611458    4119 logs.go:276] 1 containers: [64e680a31f40]
	I0829 12:02:20.611527    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0829 12:02:20.621503    4119 logs.go:276] 0 containers: []
	W0829 12:02:20.621514    4119 logs.go:278] No container was found matching "kindnet"
	I0829 12:02:20.621575    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0829 12:02:20.635455    4119 logs.go:276] 1 containers: [d76c6c38a8c3]
	I0829 12:02:20.635473    4119 logs.go:123] Gathering logs for kube-controller-manager [64e680a31f40] ...
	I0829 12:02:20.635479    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64e680a31f40"
	I0829 12:02:20.653463    4119 logs.go:123] Gathering logs for storage-provisioner [d76c6c38a8c3] ...
	I0829 12:02:20.653475    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d76c6c38a8c3"
	I0829 12:02:20.665224    4119 logs.go:123] Gathering logs for describe nodes ...
	I0829 12:02:20.665234    4119 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0829 12:02:20.700299    4119 logs.go:123] Gathering logs for coredns [1be58859c7a2] ...
	I0829 12:02:20.700314    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1be58859c7a2"
	I0829 12:02:20.712770    4119 logs.go:123] Gathering logs for kube-proxy [9d9fa1bb1973] ...
	I0829 12:02:20.712785    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d9fa1bb1973"
	I0829 12:02:20.725097    4119 logs.go:123] Gathering logs for kube-apiserver [8824431bf5eb] ...
	I0829 12:02:20.725108    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8824431bf5eb"
	I0829 12:02:20.739760    4119 logs.go:123] Gathering logs for etcd [d15c22a60866] ...
	I0829 12:02:20.739770    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d15c22a60866"
	I0829 12:02:20.759156    4119 logs.go:123] Gathering logs for Docker ...
	I0829 12:02:20.759168    4119 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0829 12:02:20.783861    4119 logs.go:123] Gathering logs for container status ...
	I0829 12:02:20.783869    4119 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 12:02:20.796851    4119 logs.go:123] Gathering logs for kubelet ...
	I0829 12:02:20.796865    4119 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0829 12:02:20.813945    4119 logs.go:138] Found kubelet problem: Aug 29 18:55:05 running-upgrade-373000 kubelet[3970]: W0829 18:55:05.970143    3970 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-373000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-373000' and this object
	W0829 12:02:20.814040    4119 logs.go:138] Found kubelet problem: Aug 29 18:55:05 running-upgrade-373000 kubelet[3970]: E0829 18:55:05.970168    3970 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-373000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-373000' and this object
	I0829 12:02:20.830831    4119 logs.go:123] Gathering logs for coredns [f57ab0cf1c31] ...
	I0829 12:02:20.830841    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f57ab0cf1c31"
	I0829 12:02:20.842572    4119 logs.go:123] Gathering logs for coredns [fe5c1d057679] ...
	I0829 12:02:20.842585    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe5c1d057679"
	I0829 12:02:20.854359    4119 logs.go:123] Gathering logs for dmesg ...
	I0829 12:02:20.854369    4119 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 12:02:20.859270    4119 logs.go:123] Gathering logs for coredns [0966778e7297] ...
	I0829 12:02:20.859278    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0966778e7297"
	I0829 12:02:20.872484    4119 logs.go:123] Gathering logs for kube-scheduler [faec0fe3d9f6] ...
	I0829 12:02:20.872495    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faec0fe3d9f6"
	I0829 12:02:20.887783    4119 out.go:358] Setting ErrFile to fd 2...
	I0829 12:02:20.887794    4119 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0829 12:02:20.887819    4119 out.go:270] X Problems detected in kubelet:
	W0829 12:02:20.887823    4119 out.go:270]   Aug 29 18:55:05 running-upgrade-373000 kubelet[3970]: W0829 18:55:05.970143    3970 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-373000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-373000' and this object
	W0829 12:02:20.887827    4119 out.go:270]   Aug 29 18:55:05 running-upgrade-373000 kubelet[3970]: E0829 18:55:05.970168    3970 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-373000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-373000' and this object
	I0829 12:02:20.887840    4119 out.go:358] Setting ErrFile to fd 2...
	I0829 12:02:20.887846    4119 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 12:02:30.891844    4119 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0829 12:02:35.894222    4119 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0829 12:02:35.894592    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0829 12:02:35.933705    4119 logs.go:276] 1 containers: [8824431bf5eb]
	I0829 12:02:35.933821    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0829 12:02:35.952024    4119 logs.go:276] 1 containers: [d15c22a60866]
	I0829 12:02:35.952107    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0829 12:02:35.964147    4119 logs.go:276] 4 containers: [f57ab0cf1c31 0966778e7297 fe5c1d057679 1be58859c7a2]
	I0829 12:02:35.964225    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0829 12:02:35.974971    4119 logs.go:276] 1 containers: [faec0fe3d9f6]
	I0829 12:02:35.975040    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0829 12:02:35.985323    4119 logs.go:276] 1 containers: [9d9fa1bb1973]
	I0829 12:02:35.985389    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0829 12:02:35.996079    4119 logs.go:276] 1 containers: [64e680a31f40]
	I0829 12:02:35.996144    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0829 12:02:36.006489    4119 logs.go:276] 0 containers: []
	W0829 12:02:36.006500    4119 logs.go:278] No container was found matching "kindnet"
	I0829 12:02:36.006551    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0829 12:02:36.017064    4119 logs.go:276] 1 containers: [d76c6c38a8c3]
	I0829 12:02:36.017081    4119 logs.go:123] Gathering logs for describe nodes ...
	I0829 12:02:36.017087    4119 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0829 12:02:36.053457    4119 logs.go:123] Gathering logs for kubelet ...
	I0829 12:02:36.053472    4119 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0829 12:02:36.071353    4119 logs.go:138] Found kubelet problem: Aug 29 18:55:05 running-upgrade-373000 kubelet[3970]: W0829 18:55:05.970143    3970 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-373000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-373000' and this object
	W0829 12:02:36.071446    4119 logs.go:138] Found kubelet problem: Aug 29 18:55:05 running-upgrade-373000 kubelet[3970]: E0829 18:55:05.970168    3970 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-373000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-373000' and this object
	I0829 12:02:36.088053    4119 logs.go:123] Gathering logs for kube-proxy [9d9fa1bb1973] ...
	I0829 12:02:36.088062    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d9fa1bb1973"
	I0829 12:02:36.100143    4119 logs.go:123] Gathering logs for Docker ...
	I0829 12:02:36.100154    4119 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0829 12:02:36.125170    4119 logs.go:123] Gathering logs for coredns [fe5c1d057679] ...
	I0829 12:02:36.125181    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe5c1d057679"
	I0829 12:02:36.136609    4119 logs.go:123] Gathering logs for coredns [1be58859c7a2] ...
	I0829 12:02:36.136620    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1be58859c7a2"
	I0829 12:02:36.148308    4119 logs.go:123] Gathering logs for kube-scheduler [faec0fe3d9f6] ...
	I0829 12:02:36.148320    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faec0fe3d9f6"
	I0829 12:02:36.163205    4119 logs.go:123] Gathering logs for storage-provisioner [d76c6c38a8c3] ...
	I0829 12:02:36.163216    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d76c6c38a8c3"
	I0829 12:02:36.174755    4119 logs.go:123] Gathering logs for coredns [0966778e7297] ...
	I0829 12:02:36.174766    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0966778e7297"
	I0829 12:02:36.186583    4119 logs.go:123] Gathering logs for kube-controller-manager [64e680a31f40] ...
	I0829 12:02:36.186594    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64e680a31f40"
	I0829 12:02:36.204907    4119 logs.go:123] Gathering logs for container status ...
	I0829 12:02:36.204917    4119 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 12:02:36.217015    4119 logs.go:123] Gathering logs for dmesg ...
	I0829 12:02:36.217027    4119 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 12:02:36.221503    4119 logs.go:123] Gathering logs for kube-apiserver [8824431bf5eb] ...
	I0829 12:02:36.221512    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8824431bf5eb"
	I0829 12:02:36.236374    4119 logs.go:123] Gathering logs for etcd [d15c22a60866] ...
	I0829 12:02:36.236385    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d15c22a60866"
	I0829 12:02:36.256716    4119 logs.go:123] Gathering logs for coredns [f57ab0cf1c31] ...
	I0829 12:02:36.256732    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f57ab0cf1c31"
	I0829 12:02:36.268295    4119 out.go:358] Setting ErrFile to fd 2...
	I0829 12:02:36.268312    4119 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0829 12:02:36.268345    4119 out.go:270] X Problems detected in kubelet:
	W0829 12:02:36.268352    4119 out.go:270]   Aug 29 18:55:05 running-upgrade-373000 kubelet[3970]: W0829 18:55:05.970143    3970 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-373000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-373000' and this object
	W0829 12:02:36.268359    4119 out.go:270]   Aug 29 18:55:05 running-upgrade-373000 kubelet[3970]: E0829 18:55:05.970168    3970 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-373000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-373000' and this object
	I0829 12:02:36.268363    4119 out.go:358] Setting ErrFile to fd 2...
	I0829 12:02:36.268365    4119 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 12:02:46.272436    4119 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0829 12:02:51.275012    4119 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0829 12:02:51.275211    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0829 12:02:51.286648    4119 logs.go:276] 1 containers: [8824431bf5eb]
	I0829 12:02:51.286720    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0829 12:02:51.297264    4119 logs.go:276] 1 containers: [d15c22a60866]
	I0829 12:02:51.297338    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0829 12:02:51.308179    4119 logs.go:276] 4 containers: [f57ab0cf1c31 0966778e7297 fe5c1d057679 1be58859c7a2]
	I0829 12:02:51.308261    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0829 12:02:51.318940    4119 logs.go:276] 1 containers: [faec0fe3d9f6]
	I0829 12:02:51.319005    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0829 12:02:51.329149    4119 logs.go:276] 1 containers: [9d9fa1bb1973]
	I0829 12:02:51.329218    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0829 12:02:51.340231    4119 logs.go:276] 1 containers: [64e680a31f40]
	I0829 12:02:51.340297    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0829 12:02:51.351145    4119 logs.go:276] 0 containers: []
	W0829 12:02:51.351156    4119 logs.go:278] No container was found matching "kindnet"
	I0829 12:02:51.351212    4119 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0829 12:02:51.366072    4119 logs.go:276] 1 containers: [d76c6c38a8c3]
	I0829 12:02:51.366088    4119 logs.go:123] Gathering logs for Docker ...
	I0829 12:02:51.366094    4119 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0829 12:02:51.389851    4119 logs.go:123] Gathering logs for kubelet ...
	I0829 12:02:51.389860    4119 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0829 12:02:51.408691    4119 logs.go:138] Found kubelet problem: Aug 29 18:55:05 running-upgrade-373000 kubelet[3970]: W0829 18:55:05.970143    3970 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-373000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-373000' and this object
	W0829 12:02:51.408786    4119 logs.go:138] Found kubelet problem: Aug 29 18:55:05 running-upgrade-373000 kubelet[3970]: E0829 18:55:05.970168    3970 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-373000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-373000' and this object
	I0829 12:02:51.425480    4119 logs.go:123] Gathering logs for kube-apiserver [8824431bf5eb] ...
	I0829 12:02:51.425489    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8824431bf5eb"
	I0829 12:02:51.448739    4119 logs.go:123] Gathering logs for coredns [f57ab0cf1c31] ...
	I0829 12:02:51.448759    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f57ab0cf1c31"
	I0829 12:02:51.464403    4119 logs.go:123] Gathering logs for coredns [0966778e7297] ...
	I0829 12:02:51.464415    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0966778e7297"
	I0829 12:02:51.476954    4119 logs.go:123] Gathering logs for coredns [1be58859c7a2] ...
	I0829 12:02:51.476966    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1be58859c7a2"
	I0829 12:02:51.489469    4119 logs.go:123] Gathering logs for kube-proxy [9d9fa1bb1973] ...
	I0829 12:02:51.489484    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9d9fa1bb1973"
	I0829 12:02:51.501254    4119 logs.go:123] Gathering logs for storage-provisioner [d76c6c38a8c3] ...
	I0829 12:02:51.501267    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d76c6c38a8c3"
	I0829 12:02:51.512735    4119 logs.go:123] Gathering logs for dmesg ...
	I0829 12:02:51.512745    4119 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 12:02:51.517341    4119 logs.go:123] Gathering logs for describe nodes ...
	I0829 12:02:51.517351    4119 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0829 12:02:51.551795    4119 logs.go:123] Gathering logs for container status ...
	I0829 12:02:51.551807    4119 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 12:02:51.563537    4119 logs.go:123] Gathering logs for etcd [d15c22a60866] ...
	I0829 12:02:51.563552    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d15c22a60866"
	I0829 12:02:51.578674    4119 logs.go:123] Gathering logs for coredns [fe5c1d057679] ...
	I0829 12:02:51.578686    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe5c1d057679"
	I0829 12:02:51.590962    4119 logs.go:123] Gathering logs for kube-scheduler [faec0fe3d9f6] ...
	I0829 12:02:51.590972    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faec0fe3d9f6"
	I0829 12:02:51.606253    4119 logs.go:123] Gathering logs for kube-controller-manager [64e680a31f40] ...
	I0829 12:02:51.606264    4119 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64e680a31f40"
	I0829 12:02:51.623946    4119 out.go:358] Setting ErrFile to fd 2...
	I0829 12:02:51.623956    4119 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0829 12:02:51.623982    4119 out.go:270] X Problems detected in kubelet:
	W0829 12:02:51.623986    4119 out.go:270]   Aug 29 18:55:05 running-upgrade-373000 kubelet[3970]: W0829 18:55:05.970143    3970 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-373000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-373000' and this object
	W0829 12:02:51.623989    4119 out.go:270]   Aug 29 18:55:05 running-upgrade-373000 kubelet[3970]: E0829 18:55:05.970168    3970 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-373000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-373000' and this object
	I0829 12:02:51.623992    4119 out.go:358] Setting ErrFile to fd 2...
	I0829 12:02:51.623996    4119 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 12:03:01.627674    4119 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0829 12:03:06.630037    4119 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0829 12:03:06.634423    4119 out.go:201] 
	W0829 12:03:06.638367    4119 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0829 12:03:06.638375    4119 out.go:270] * 
	W0829 12:03:06.638887    4119 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0829 12:03:06.647413    4119 out.go:201] 

** /stderr **
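The recurring kubelet problem in the stderr above is a Node-authorizer denial: the kubelet's reflector is refused the kube-system/kube-proxy ConfigMap because the apiserver's node authorizer sees no pod bound to running-upgrade-373000 that references that object yet. A hedged way to reproduce the check by hand, using kubectl's standard impersonation flags and the kubeconfig path the log itself runs with:

	kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	  --as=system:node:running-upgrade-373000 --as-group=system:nodes \
	  -n kube-system get configmap kube-proxy
	# Expect the same "forbidden ... no relationship found" error until a pod
	# mounting that ConfigMap is bound to the node.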
version_upgrade_test.go:132: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p running-upgrade-373000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
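The GUEST_START exit matches the probe loop visible in the stderr: api_server.go re-checks https://10.0.2.15:8443/healthz roughly every ten seconds with a five-second client timeout until the 6m0s node-start budget is exhausted. A minimal equivalent probe (endpoint and timeout taken from the log; 10.0.2.15 is the guest-side address, so this may need to run via "minikube ssh" rather than from the host):

	curl -k --max-time 5 https://10.0.2.15:8443/healthz
	# -k because the apiserver cert is not trusted by the probing shell;
	# a healthy apiserver answers "ok", here every attempt hits the deadline.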
panic.go:626: *** TestRunningBinaryUpgrade FAILED at 2024-08-29 12:03:06.724076 -0700 PDT m=+3511.680327626
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-373000 -n running-upgrade-373000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-373000 -n running-upgrade-373000: exit status 2 (15.677202417s)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestRunningBinaryUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestRunningBinaryUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p running-upgrade-373000 logs -n 25
helpers_test.go:252: TestRunningBinaryUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-015000 sudo cat                            | cilium-015000             | jenkins | v1.33.1 | 29 Aug 24 11:52 PDT |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p cilium-015000 sudo cat                            | cilium-015000             | jenkins | v1.33.1 | 29 Aug 24 11:52 PDT |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p cilium-015000 sudo                                | cilium-015000             | jenkins | v1.33.1 | 29 Aug 24 11:52 PDT |                     |
	|         | cri-dockerd --version                                |                           |         |         |                     |                     |
	| ssh     | -p cilium-015000 sudo                                | cilium-015000             | jenkins | v1.33.1 | 29 Aug 24 11:52 PDT |                     |
	|         | systemctl status containerd                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-015000 sudo                                | cilium-015000             | jenkins | v1.33.1 | 29 Aug 24 11:52 PDT |                     |
	|         | systemctl cat containerd                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-015000 sudo cat                            | cilium-015000             | jenkins | v1.33.1 | 29 Aug 24 11:52 PDT |                     |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p cilium-015000 sudo cat                            | cilium-015000             | jenkins | v1.33.1 | 29 Aug 24 11:52 PDT |                     |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p cilium-015000 sudo                                | cilium-015000             | jenkins | v1.33.1 | 29 Aug 24 11:52 PDT |                     |
	|         | containerd config dump                               |                           |         |         |                     |                     |
	| ssh     | -p cilium-015000 sudo                                | cilium-015000             | jenkins | v1.33.1 | 29 Aug 24 11:52 PDT |                     |
	|         | systemctl status crio --all                          |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p cilium-015000 sudo                                | cilium-015000             | jenkins | v1.33.1 | 29 Aug 24 11:52 PDT |                     |
	|         | systemctl cat crio --no-pager                        |                           |         |         |                     |                     |
	| ssh     | -p cilium-015000 sudo find                           | cilium-015000             | jenkins | v1.33.1 | 29 Aug 24 11:52 PDT |                     |
	|         | /etc/crio -type f -exec sh -c                        |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |                     |
	| ssh     | -p cilium-015000 sudo crio                           | cilium-015000             | jenkins | v1.33.1 | 29 Aug 24 11:52 PDT |                     |
	|         | config                                               |                           |         |         |                     |                     |
	| delete  | -p cilium-015000                                     | cilium-015000             | jenkins | v1.33.1 | 29 Aug 24 11:52 PDT | 29 Aug 24 11:52 PDT |
	| start   | -p kubernetes-upgrade-361000                         | kubernetes-upgrade-361000 | jenkins | v1.33.1 | 29 Aug 24 11:52 PDT |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                               |                           |         |         |                     |                     |
	|         | --driver=qemu2                                       |                           |         |         |                     |                     |
	| delete  | -p offline-docker-200000                             | offline-docker-200000     | jenkins | v1.33.1 | 29 Aug 24 11:52 PDT | 29 Aug 24 11:52 PDT |
	| start   | -p stopped-upgrade-585000                            | minikube                  | jenkins | v1.26.0 | 29 Aug 24 11:52 PDT | 29 Aug 24 11:53 PDT |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                                    |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-361000                         | kubernetes-upgrade-361000 | jenkins | v1.33.1 | 29 Aug 24 11:52 PDT | 29 Aug 24 11:52 PDT |
	| start   | -p kubernetes-upgrade-361000                         | kubernetes-upgrade-361000 | jenkins | v1.33.1 | 29 Aug 24 11:52 PDT |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                               |                           |         |         |                     |                     |
	|         | --driver=qemu2                                       |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-361000                         | kubernetes-upgrade-361000 | jenkins | v1.33.1 | 29 Aug 24 11:52 PDT | 29 Aug 24 11:52 PDT |
	| start   | -p running-upgrade-373000                            | minikube                  | jenkins | v1.26.0 | 29 Aug 24 11:52 PDT | 29 Aug 24 11:54 PDT |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                                    |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-585000 stop                          | minikube                  | jenkins | v1.26.0 | 29 Aug 24 11:53 PDT | 29 Aug 24 11:53 PDT |
	| start   | -p stopped-upgrade-585000                            | stopped-upgrade-585000    | jenkins | v1.33.1 | 29 Aug 24 11:53 PDT |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                               |                           |         |         |                     |                     |
	|         | --driver=qemu2                                       |                           |         |         |                     |                     |
	| start   | -p running-upgrade-373000                            | running-upgrade-373000    | jenkins | v1.33.1 | 29 Aug 24 11:54 PDT |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                               |                           |         |         |                     |                     |
	|         | --driver=qemu2                                       |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-585000                            | stopped-upgrade-585000    | jenkins | v1.33.1 | 29 Aug 24 12:03 PDT | 29 Aug 24 12:03 PDT |
	| start   | -p pause-799000 --memory=2048                        | pause-799000              | jenkins | v1.33.1 | 29 Aug 24 12:03 PDT |                     |
	|         | --install-addons=false                               |                           |         |         |                     |                     |
	|         | --wait=all --driver=qemu2                            |                           |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/29 12:03:17
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0829 12:03:17.346263    4664 out.go:345] Setting OutFile to fd 1 ...
	I0829 12:03:17.346369    4664 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 12:03:17.346371    4664 out.go:358] Setting ErrFile to fd 2...
	I0829 12:03:17.346373    4664 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 12:03:17.346482    4664 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19531-965/.minikube/bin
	I0829 12:03:17.347559    4664 out.go:352] Setting JSON to false
	I0829 12:03:17.364139    4664 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3761,"bootTime":1724954436,"procs":482,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0829 12:03:17.364204    4664 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0829 12:03:17.370900    4664 out.go:177] * [pause-799000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0829 12:03:17.379578    4664 out.go:177]   - MINIKUBE_LOCATION=19531
	I0829 12:03:17.379613    4664 notify.go:220] Checking for updates...
	I0829 12:03:17.387737    4664 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19531-965/kubeconfig
	I0829 12:03:17.390811    4664 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0829 12:03:17.393763    4664 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0829 12:03:17.396745    4664 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19531-965/.minikube
	I0829 12:03:17.399785    4664 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0829 12:03:17.403073    4664 config.go:182] Loaded profile config "multinode-531000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0829 12:03:17.403137    4664 config.go:182] Loaded profile config "running-upgrade-373000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0829 12:03:17.403179    4664 driver.go:392] Setting default libvirt URI to qemu:///system
	I0829 12:03:17.407791    4664 out.go:177] * Using the qemu2 driver based on user configuration
	I0829 12:03:17.414726    4664 start.go:297] selected driver: qemu2
	I0829 12:03:17.414732    4664 start.go:901] validating driver "qemu2" against <nil>
	I0829 12:03:17.414738    4664 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0829 12:03:17.417133    4664 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0829 12:03:17.419744    4664 out.go:177] * Automatically selected the socket_vmnet network
	I0829 12:03:17.422852    4664 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0829 12:03:17.422873    4664 cni.go:84] Creating CNI manager for ""
	I0829 12:03:17.422881    4664 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0829 12:03:17.422884    4664 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0829 12:03:17.422906    4664 start.go:340] cluster config:
	{Name:pause-799000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:pause-799000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 12:03:17.426603    4664 iso.go:125] acquiring lock: {Name:mke1867919ff797f263eb38fb348ce00bba9d753 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 12:03:17.433748    4664 out.go:177] * Starting "pause-799000" primary control-plane node in "pause-799000" cluster
	I0829 12:03:17.437732    4664 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0829 12:03:17.437745    4664 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19531-965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0829 12:03:17.437751    4664 cache.go:56] Caching tarball of preloaded images
	I0829 12:03:17.437803    4664 preload.go:172] Found /Users/jenkins/minikube-integration/19531-965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0829 12:03:17.437806    4664 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0829 12:03:17.437870    4664 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/pause-799000/config.json ...
	I0829 12:03:17.437878    4664 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/pause-799000/config.json: {Name:mk4af43db37197a7f82c2c3f2c8cec87c6111673 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 12:03:17.438115    4664 start.go:360] acquireMachinesLock for pause-799000: {Name:mk54c878341df5948b162b01843176e59d4f0973 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0829 12:03:17.438155    4664 start.go:364] duration metric: took 34.959µs to acquireMachinesLock for "pause-799000"
	I0829 12:03:17.438164    4664 start.go:93] Provisioning new machine with config: &{Name:pause-799000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:pause-799000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0829 12:03:17.438186    4664 start.go:125] createHost starting for "" (driver="qemu2")
	I0829 12:03:17.446728    4664 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0829 12:03:17.466974    4664 start.go:159] libmachine.API.Create for "pause-799000" (driver="qemu2")
	I0829 12:03:17.467001    4664 client.go:168] LocalClient.Create starting
	I0829 12:03:17.467069    4664 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19531-965/.minikube/certs/ca.pem
	I0829 12:03:17.467096    4664 main.go:141] libmachine: Decoding PEM data...
	I0829 12:03:17.467107    4664 main.go:141] libmachine: Parsing certificate...
	I0829 12:03:17.467145    4664 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19531-965/.minikube/certs/cert.pem
	I0829 12:03:17.467166    4664 main.go:141] libmachine: Decoding PEM data...
	I0829 12:03:17.467175    4664 main.go:141] libmachine: Parsing certificate...
	I0829 12:03:17.468073    4664 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19531-965/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19531-965/.minikube/cache/iso/arm64/minikube-v1.33.1-1724775098-19521-arm64.iso...
	I0829 12:03:17.683588    4664 main.go:141] libmachine: Creating SSH key...
	I0829 12:03:17.910856    4664 main.go:141] libmachine: Creating Disk image...
	I0829 12:03:17.910863    4664 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0829 12:03:17.911115    4664 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19531-965/.minikube/machines/pause-799000/disk.qcow2.raw /Users/jenkins/minikube-integration/19531-965/.minikube/machines/pause-799000/disk.qcow2
	I0829 12:03:17.926346    4664 main.go:141] libmachine: STDOUT: 
	I0829 12:03:17.926360    4664 main.go:141] libmachine: STDERR: 
	I0829 12:03:17.926407    4664 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19531-965/.minikube/machines/pause-799000/disk.qcow2 +20000M
	I0829 12:03:17.934495    4664 main.go:141] libmachine: STDOUT: Image resized.
	
	I0829 12:03:17.934506    4664 main.go:141] libmachine: STDERR: 
	I0829 12:03:17.934525    4664 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19531-965/.minikube/machines/pause-799000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19531-965/.minikube/machines/pause-799000/disk.qcow2
	I0829 12:03:17.934527    4664 main.go:141] libmachine: Starting QEMU VM...
	I0829 12:03:17.934539    4664 qemu.go:418] Using hvf for hardware acceleration
	I0829 12:03:17.934560    4664 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19531-965/.minikube/machines/pause-799000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19531-965/.minikube/machines/pause-799000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19531-965/.minikube/machines/pause-799000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:3f:d6:28:88:b6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19531-965/.minikube/machines/pause-799000/disk.qcow2
	I0829 12:03:17.940117    4664 main.go:141] libmachine: STDOUT: 
	I0829 12:03:17.940132    4664 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0829 12:03:17.940156    4664 client.go:171] duration metric: took 473.158083ms to LocalClient.Create
	I0829 12:03:19.942336    4664 start.go:128] duration metric: took 2.504159084s to createHost
	I0829 12:03:19.942370    4664 start.go:83] releasing machines lock for "pause-799000", held for 2.504244084s
	W0829 12:03:19.942460    4664 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0829 12:03:19.949031    4664 out.go:177] * Deleting "pause-799000" in qemu2 ...
	W0829 12:03:19.981476    4664 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0829 12:03:19.981501    4664 start.go:729] Will try again in 5 seconds ...
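This later "Last Start" capture belongs to the unrelated pause-799000 profile and fails one layer below Kubernetes: QEMU's socket netdev cannot attach because nothing is listening on /var/run/socket_vmnet. A hedged spot-check (socket path from the log; the Homebrew service name assumes socket_vmnet was installed via brew, as minikube's qemu2 driver docs suggest):

	ls -l /var/run/socket_vmnet           # the daemon's listening socket should exist
	sudo brew services info socket_vmnet  # assumes a Homebrew-managed socket_vmnet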
	
	
	==> Docker <==
	-- Journal begins at Thu 2024-08-29 18:53:26 UTC, ends at Thu 2024-08-29 19:03:22 UTC. --
	Aug 29 19:03:01 running-upgrade-373000 dockerd[3477]: time="2024-08-29T19:03:01.940556867Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/1f5b7f9ec4d1db50c3eb4a5042a515783892b64738d9b2ee4698707bb559ed78 pid=15828 runtime=io.containerd.runc.v2
	Aug 29 19:03:03 running-upgrade-373000 cri-dockerd[3320]: time="2024-08-29T19:03:03Z" level=error msg="ContainerStats resp: {0x4000646580 linux}"
	Aug 29 19:03:03 running-upgrade-373000 cri-dockerd[3320]: time="2024-08-29T19:03:03Z" level=error msg="ContainerStats resp: {0x400089b080 linux}"
	Aug 29 19:03:04 running-upgrade-373000 cri-dockerd[3320]: time="2024-08-29T19:03:04Z" level=error msg="ContainerStats resp: {0x400099e800 linux}"
	Aug 29 19:03:05 running-upgrade-373000 cri-dockerd[3320]: time="2024-08-29T19:03:05Z" level=error msg="ContainerStats resp: {0x400021d200 linux}"
	Aug 29 19:03:05 running-upgrade-373000 cri-dockerd[3320]: time="2024-08-29T19:03:05Z" level=error msg="ContainerStats resp: {0x400099f7c0 linux}"
	Aug 29 19:03:05 running-upgrade-373000 cri-dockerd[3320]: time="2024-08-29T19:03:05Z" level=error msg="ContainerStats resp: {0x400021d8c0 linux}"
	Aug 29 19:03:05 running-upgrade-373000 cri-dockerd[3320]: time="2024-08-29T19:03:05Z" level=error msg="ContainerStats resp: {0x400021da00 linux}"
	Aug 29 19:03:05 running-upgrade-373000 cri-dockerd[3320]: time="2024-08-29T19:03:05Z" level=error msg="ContainerStats resp: {0x4000806000 linux}"
	Aug 29 19:03:05 running-upgrade-373000 cri-dockerd[3320]: time="2024-08-29T19:03:05Z" level=error msg="ContainerStats resp: {0x40000b86c0 linux}"
	Aug 29 19:03:05 running-upgrade-373000 cri-dockerd[3320]: time="2024-08-29T19:03:05Z" level=error msg="ContainerStats resp: {0x4000806840 linux}"
	Aug 29 19:03:06 running-upgrade-373000 cri-dockerd[3320]: time="2024-08-29T19:03:06Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Aug 29 19:03:11 running-upgrade-373000 cri-dockerd[3320]: time="2024-08-29T19:03:11Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Aug 29 19:03:15 running-upgrade-373000 cri-dockerd[3320]: time="2024-08-29T19:03:15Z" level=error msg="ContainerStats resp: {0x4000647d40 linux}"
	Aug 29 19:03:15 running-upgrade-373000 cri-dockerd[3320]: time="2024-08-29T19:03:15Z" level=error msg="ContainerStats resp: {0x400007fec0 linux}"
	Aug 29 19:03:16 running-upgrade-373000 cri-dockerd[3320]: time="2024-08-29T19:03:16Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Aug 29 19:03:16 running-upgrade-373000 cri-dockerd[3320]: time="2024-08-29T19:03:16Z" level=error msg="ContainerStats resp: {0x400099f200 linux}"
	Aug 29 19:03:17 running-upgrade-373000 cri-dockerd[3320]: time="2024-08-29T19:03:17Z" level=error msg="ContainerStats resp: {0x400099fbc0 linux}"
	Aug 29 19:03:17 running-upgrade-373000 cri-dockerd[3320]: time="2024-08-29T19:03:17Z" level=error msg="ContainerStats resp: {0x400099fd00 linux}"
	Aug 29 19:03:17 running-upgrade-373000 cri-dockerd[3320]: time="2024-08-29T19:03:17Z" level=error msg="ContainerStats resp: {0x400021c240 linux}"
	Aug 29 19:03:17 running-upgrade-373000 cri-dockerd[3320]: time="2024-08-29T19:03:17Z" level=error msg="ContainerStats resp: {0x40003580c0 linux}"
	Aug 29 19:03:17 running-upgrade-373000 cri-dockerd[3320]: time="2024-08-29T19:03:17Z" level=error msg="ContainerStats resp: {0x4000358600 linux}"
	Aug 29 19:03:17 running-upgrade-373000 cri-dockerd[3320]: time="2024-08-29T19:03:17Z" level=error msg="ContainerStats resp: {0x4000358a00 linux}"
	Aug 29 19:03:17 running-upgrade-373000 cri-dockerd[3320]: time="2024-08-29T19:03:17Z" level=error msg="ContainerStats resp: {0x4000359040 linux}"
	Aug 29 19:03:21 running-upgrade-373000 cri-dockerd[3320]: time="2024-08-29T19:03:21Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
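Most of the error-level lines in this journal appear to be cri-dockerd printing routine stats payloads ("ContainerStats resp: ...") rather than real failures, and the recurring CNI configuration line is informational. A hedged filter for rescanning the journal with that noise removed (plain grep over the collector's own journalctl command):

	sudo journalctl -u docker -u cri-docker -n 400 | grep -v 'ContainerStats resp'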
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	1f5b7f9ec4d1d       edaa71f2aee88       21 seconds ago      Running             coredns                   2                   199349d55581e
	7c90d47e8e9ac       edaa71f2aee88       21 seconds ago      Running             coredns                   2                   f12d88b9d1a28
	f57ab0cf1c313       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   f12d88b9d1a28
	0966778e7297b       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   199349d55581e
	9d9fa1bb19735       fcbd620bbac08       4 minutes ago       Running             kube-proxy                0                   fda7e7f4aaf65
	d76c6c38a8c3e       ba04bb24b9575       4 minutes ago       Running             storage-provisioner       0                   0f89ca6919a1f
	d15c22a60866a       a9a710bb96df0       4 minutes ago       Running             etcd                      0                   2d896781e3350
	64e680a31f408       f61bbe9259d7c       4 minutes ago       Running             kube-controller-manager   0                   b343f87572cd5
	8824431bf5eb6       7c5896a75862a       4 minutes ago       Running             kube-apiserver            0                   649bcbd739b56
	faec0fe3d9f65       000c19baf6bba       4 minutes ago       Running             kube-scheduler            0                   887df1d48831f
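Note the restart pattern: both coredns pods are on attempt 2 (their attempt-1 containers exited two minutes earlier), while every control-plane container is still on attempt 0 and Running, i.e. the kube-apiserver process stays up even though its /healthz endpoint never answers. The same view can be reproduced inside the guest with the fallback command the collector uses for "container status":

	minikube -p running-upgrade-373000 ssh -- "sudo crictl ps -a || sudo docker ps -a"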
	
	
	==> coredns [0966778e7297] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 1164800698547158683.1023199689740653753. HINFO: read udp 10.244.0.2:45181->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1164800698547158683.1023199689740653753. HINFO: read udp 10.244.0.2:56056->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1164800698547158683.1023199689740653753. HINFO: read udp 10.244.0.2:44848->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1164800698547158683.1023199689740653753. HINFO: read udp 10.244.0.2:35335->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1164800698547158683.1023199689740653753. HINFO: read udp 10.244.0.2:41645->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1164800698547158683.1023199689740653753. HINFO: read udp 10.244.0.2:40757->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1164800698547158683.1023199689740653753. HINFO: read udp 10.244.0.2:57149->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1164800698547158683.1023199689740653753. HINFO: read udp 10.244.0.2:35377->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1164800698547158683.1023199689740653753. HINFO: read udp 10.244.0.2:55887->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1164800698547158683.1023199689740653753. HINFO: read udp 10.244.0.2:59798->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [1f5b7f9ec4d1] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 5379057592584648203.6253300930866527400. HINFO: read udp 10.244.0.2:51499->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5379057592584648203.6253300930866527400. HINFO: read udp 10.244.0.2:47327->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5379057592584648203.6253300930866527400. HINFO: read udp 10.244.0.2:49681->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5379057592584648203.6253300930866527400. HINFO: read udp 10.244.0.2:51597->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5379057592584648203.6253300930866527400. HINFO: read udp 10.244.0.2:58358->10.0.2.3:53: i/o timeout
	
	
	==> coredns [7c90d47e8e9a] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 7450847298164264487.1075529928729064259. HINFO: read udp 10.244.0.3:60154->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7450847298164264487.1075529928729064259. HINFO: read udp 10.244.0.3:41751->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7450847298164264487.1075529928729064259. HINFO: read udp 10.244.0.3:33556->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7450847298164264487.1075529928729064259. HINFO: read udp 10.244.0.3:55235->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7450847298164264487.1075529928729064259. HINFO: read udp 10.244.0.3:60257->10.0.2.3:53: i/o timeout
	
	
	==> coredns [f57ab0cf1c31] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 4305925529666307671.1754773206196285020. HINFO: read udp 10.244.0.3:48493->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4305925529666307671.1754773206196285020. HINFO: read udp 10.244.0.3:52015->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4305925529666307671.1754773206196285020. HINFO: read udp 10.244.0.3:33313->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4305925529666307671.1754773206196285020. HINFO: read udp 10.244.0.3:49208->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4305925529666307671.1754773206196285020. HINFO: read udp 10.244.0.3:60474->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4305925529666307671.1754773206196285020. HINFO: read udp 10.244.0.3:55091->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4305925529666307671.1754773206196285020. HINFO: read udp 10.244.0.3:43592->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4305925529666307671.1754773206196285020. HINFO: read udp 10.244.0.3:59989->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4305925529666307671.1754773206196285020. HINFO: read udp 10.244.0.3:44393->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4305925529666307671.1754773206196285020. HINFO: read udp 10.244.0.3:44172->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
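All four coredns instances fail the same way: their HINFO probe queries to 10.0.2.3:53 (QEMU's user-mode-networking DNS proxy) time out, so the guest has no working upstream resolver. A hedged spot-check from inside the VM, assuming nslookup is present in the guest image (it accepts the server to query as its second argument):

	minikube -p running-upgrade-373000 ssh -- nslookup kubernetes.io 10.0.2.3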
	
	
	==> describe nodes <==
	Name:               running-upgrade-373000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=running-upgrade-373000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=95341f0b655cea8be5ebfc6bf112c8367dc08d33
	                    minikube.k8s.io/name=running-upgrade-373000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_29T11_59_02_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 29 Aug 2024 18:58:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  running-upgrade-373000
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 29 Aug 2024 19:03:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 29 Aug 2024 18:59:02 +0000   Thu, 29 Aug 2024 18:58:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 29 Aug 2024 18:59:02 +0000   Thu, 29 Aug 2024 18:58:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 29 Aug 2024 18:59:02 +0000   Thu, 29 Aug 2024 18:58:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 29 Aug 2024 18:59:02 +0000   Thu, 29 Aug 2024 18:59:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.0.2.15
	  Hostname:    running-upgrade-373000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	System Info:
	  Machine ID:                 2cd81afc1f8041e4bdf665361213bbfe
	  System UUID:                2cd81afc1f8041e4bdf665361213bbfe
	  Boot ID:                    bb42150b-bba0-4d81-8fef-5bdeca68cdf3
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.24.1
	  Kube-Proxy Version:         v1.24.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-75d6k                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m7s
	  kube-system                 coredns-6d4b75cb6d-x8785                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m7s
	  kube-system                 etcd-running-upgrade-373000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m22s
	  kube-system                 kube-apiserver-running-upgrade-373000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m21s
	  kube-system                 kube-controller-manager-running-upgrade-373000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m20s
	  kube-system                 kube-proxy-rh5n6                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m7s
	  kube-system                 kube-scheduler-running-upgrade-373000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m20s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m20s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m6s   kube-proxy       
	  Normal  NodeReady                4m20s  kubelet          Node running-upgrade-373000 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  4m20s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m20s  kubelet          Node running-upgrade-373000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m20s  kubelet          Node running-upgrade-373000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m20s  kubelet          Node running-upgrade-373000 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m20s  kubelet          Starting kubelet.
	  Normal  RegisteredNode           4m8s   node-controller  Node running-upgrade-373000 event: Registered Node running-upgrade-373000 in Controller
	
	
	==> dmesg <==
	[  +2.115371] systemd-fstab-generator[872]: Ignoring "noauto" for root device
	[  +0.078341] systemd-fstab-generator[883]: Ignoring "noauto" for root device
	[  +0.062028] systemd-fstab-generator[894]: Ignoring "noauto" for root device
	[  +1.094301] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.085855] systemd-fstab-generator[1044]: Ignoring "noauto" for root device
	[  +0.086861] systemd-fstab-generator[1055]: Ignoring "noauto" for root device
	[  +2.388803] systemd-fstab-generator[1287]: Ignoring "noauto" for root device
	[Aug29 18:54] systemd-fstab-generator[2004]: Ignoring "noauto" for root device
	[ +12.014017] systemd-fstab-generator[2307]: Ignoring "noauto" for root device
	[  +0.142316] systemd-fstab-generator[2341]: Ignoring "noauto" for root device
	[  +0.094280] systemd-fstab-generator[2352]: Ignoring "noauto" for root device
	[  +0.094890] systemd-fstab-generator[2365]: Ignoring "noauto" for root device
	[ +21.363444] kauditd_printk_skb: 47 callbacks suppressed
	[  +0.207721] systemd-fstab-generator[3276]: Ignoring "noauto" for root device
	[  +0.084647] systemd-fstab-generator[3288]: Ignoring "noauto" for root device
	[  +0.084798] systemd-fstab-generator[3299]: Ignoring "noauto" for root device
	[  +0.073785] systemd-fstab-generator[3313]: Ignoring "noauto" for root device
	[  +2.349808] systemd-fstab-generator[3464]: Ignoring "noauto" for root device
	[  +3.865617] systemd-fstab-generator[3835]: Ignoring "noauto" for root device
	[  +1.087375] systemd-fstab-generator[3964]: Ignoring "noauto" for root device
	[Aug29 18:55] kauditd_printk_skb: 68 callbacks suppressed
	[ +39.557286] kauditd_printk_skb: 21 callbacks suppressed
	[Aug29 18:58] systemd-fstab-generator[10341]: Ignoring "noauto" for root device
	[Aug29 18:59] systemd-fstab-generator[10938]: Ignoring "noauto" for root device
	[  +0.462932] systemd-fstab-generator[11068]: Ignoring "noauto" for root device
	
	
	==> etcd [d15c22a60866] <==
	{"level":"info","ts":"2024-08-29T18:58:57.458Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-29T18:58:57.458Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f074a195de705325","initial-advertise-peer-urls":["https://10.0.2.15:2380"],"listen-peer-urls":["https://10.0.2.15:2380"],"advertise-client-urls":["https://10.0.2.15:2379"],"listen-client-urls":["https://10.0.2.15:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-29T18:58:57.458Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-29T18:58:57.458Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-08-29T18:58:57.458Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-08-29T18:58:57.458Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 switched to configuration voters=(17326651331455243045)"}
	{"level":"info","ts":"2024-08-29T18:58:57.458Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","added-peer-id":"f074a195de705325","added-peer-peer-urls":["https://10.0.2.15:2380"]}
	{"level":"info","ts":"2024-08-29T18:58:58.250Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 is starting a new election at term 1"}
	{"level":"info","ts":"2024-08-29T18:58:58.250Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-08-29T18:58:58.250Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgPreVoteResp from f074a195de705325 at term 1"}
	{"level":"info","ts":"2024-08-29T18:58:58.250Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became candidate at term 2"}
	{"level":"info","ts":"2024-08-29T18:58:58.250Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgVoteResp from f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-08-29T18:58:58.250Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became leader at term 2"}
	{"level":"info","ts":"2024-08-29T18:58:58.250Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f074a195de705325 elected leader f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-08-29T18:58:58.250Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"f074a195de705325","local-member-attributes":"{Name:running-upgrade-373000 ClientURLs:[https://10.0.2.15:2379]}","request-path":"/0/members/f074a195de705325/attributes","cluster-id":"ef296cf39f5d9d66","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-29T18:58:58.250Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-29T18:58:58.250Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-29T18:58:58.251Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-29T18:58:58.251Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-29T18:58:58.251Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-29T18:58:58.251Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-29T18:58:58.251Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-29T18:58:58.251Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"10.0.2.15:2379"}
	{"level":"info","ts":"2024-08-29T18:58:58.252Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-29T18:58:58.252Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 19:03:22 up 9 min,  0 users,  load average: 0.34, 0.23, 0.10
	Linux running-upgrade-373000 5.10.57 #1 SMP PREEMPT Thu Jun 16 21:01:29 UTC 2022 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [8824431bf5eb] <==
	I0829 18:58:59.506378       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0829 18:58:59.510491       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0829 18:58:59.510556       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0829 18:58:59.510998       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0829 18:58:59.511010       1 cache.go:39] Caches are synced for autoregister controller
	I0829 18:58:59.518508       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0829 18:58:59.556461       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0829 18:59:00.239309       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0829 18:59:00.413924       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0829 18:59:00.416315       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0829 18:59:00.416334       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0829 18:59:00.541216       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0829 18:59:00.554160       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0829 18:59:00.573989       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0829 18:59:00.577667       1 lease.go:234] Resetting endpoints for master service "kubernetes" to [10.0.2.15]
	I0829 18:59:00.578082       1 controller.go:611] quota admission added evaluator for: endpoints
	I0829 18:59:00.579506       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0829 18:59:01.537490       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0829 18:59:02.021138       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0829 18:59:02.024872       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0829 18:59:02.031810       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0829 18:59:02.077641       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0829 18:59:14.693482       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0829 18:59:15.193575       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0829 18:59:15.945615       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	
	==> kube-controller-manager [64e680a31f40] <==
	I0829 18:59:14.508163       1 shared_informer.go:262] Caches are synced for GC
	I0829 18:59:14.508966       1 range_allocator.go:374] Set node running-upgrade-373000 PodCIDR to [10.244.0.0/24]
	I0829 18:59:14.538070       1 shared_informer.go:262] Caches are synced for persistent volume
	I0829 18:59:14.539128       1 shared_informer.go:262] Caches are synced for crt configmap
	I0829 18:59:14.542455       1 shared_informer.go:262] Caches are synced for taint
	I0829 18:59:14.542508       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	W0829 18:59:14.542569       1 node_lifecycle_controller.go:1014] Missing timestamp for Node running-upgrade-373000. Assuming now as a timestamp.
	I0829 18:59:14.542603       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0829 18:59:14.542580       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0829 18:59:14.542702       1 event.go:294] "Event occurred" object="running-upgrade-373000" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node running-upgrade-373000 event: Registered Node running-upgrade-373000 in Controller"
	I0829 18:59:14.587731       1 shared_informer.go:262] Caches are synced for attach detach
	I0829 18:59:14.588904       1 shared_informer.go:262] Caches are synced for daemon sets
	I0829 18:59:14.591519       1 shared_informer.go:262] Caches are synced for TTL
	I0829 18:59:14.596119       1 shared_informer.go:262] Caches are synced for resource quota
	I0829 18:59:14.618772       1 shared_informer.go:262] Caches are synced for resource quota
	I0829 18:59:14.637983       1 shared_informer.go:262] Caches are synced for endpoint
	I0829 18:59:14.639169       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I0829 18:59:14.640975       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I0829 18:59:14.694856       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-6d4b75cb6d to 2"
	I0829 18:59:15.009850       1 shared_informer.go:262] Caches are synced for garbage collector
	I0829 18:59:15.040307       1 shared_informer.go:262] Caches are synced for garbage collector
	I0829 18:59:15.040321       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0829 18:59:15.196580       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-rh5n6"
	I0829 18:59:15.394283       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-x8785"
	I0829 18:59:15.400181       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-75d6k"
	
	
	==> kube-proxy [9d9fa1bb1973] <==
	I0829 18:59:15.934724       1 node.go:163] Successfully retrieved node IP: 10.0.2.15
	I0829 18:59:15.934749       1 server_others.go:138] "Detected node IP" address="10.0.2.15"
	I0829 18:59:15.934840       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0829 18:59:15.943263       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0829 18:59:15.943274       1 server_others.go:206] "Using iptables Proxier"
	I0829 18:59:15.943333       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0829 18:59:15.943448       1 server.go:661] "Version info" version="v1.24.1"
	I0829 18:59:15.943455       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0829 18:59:15.943771       1 config.go:317] "Starting service config controller"
	I0829 18:59:15.943799       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0829 18:59:15.943827       1 config.go:226] "Starting endpoint slice config controller"
	I0829 18:59:15.943841       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0829 18:59:15.944878       1 config.go:444] "Starting node config controller"
	I0829 18:59:15.944904       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0829 18:59:16.044331       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0829 18:59:16.044342       1 shared_informer.go:262] Caches are synced for service config
	I0829 18:59:16.044934       1 shared_informer.go:262] Caches are synced for node config
	
	
	==> kube-scheduler [faec0fe3d9f6] <==
	W0829 18:58:59.469218       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0829 18:58:59.469543       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0829 18:58:59.469236       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0829 18:58:59.469547       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0829 18:58:59.469237       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0829 18:58:59.469561       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0829 18:58:59.469250       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0829 18:58:59.469573       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0829 18:58:59.469253       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0829 18:58:59.469584       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0829 18:58:59.469260       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0829 18:58:59.469597       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0829 18:58:59.469347       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0829 18:58:59.470870       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0829 18:59:00.302438       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0829 18:59:00.302504       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0829 18:59:00.391110       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0829 18:59:00.391182       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0829 18:59:00.403647       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0829 18:59:00.403798       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0829 18:59:00.405625       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0829 18:59:00.405658       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0829 18:59:00.445663       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0829 18:59:00.445784       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0829 18:59:02.568502       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Thu 2024-08-29 18:53:26 UTC, ends at Thu 2024-08-29 19:03:23 UTC. --
	Aug 29 18:59:03 running-upgrade-373000 kubelet[10944]: E0829 18:59:03.856078   10944 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"etcd-running-upgrade-373000\" already exists" pod="kube-system/etcd-running-upgrade-373000"
	Aug 29 18:59:04 running-upgrade-373000 kubelet[10944]: E0829 18:59:04.055782   10944 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-apiserver-running-upgrade-373000\" already exists" pod="kube-system/kube-apiserver-running-upgrade-373000"
	Aug 29 18:59:04 running-upgrade-373000 kubelet[10944]: I0829 18:59:04.251702   10944 request.go:601] Waited for 1.149780803s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods
	Aug 29 18:59:04 running-upgrade-373000 kubelet[10944]: E0829 18:59:04.254692   10944 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-running-upgrade-373000\" already exists" pod="kube-system/kube-controller-manager-running-upgrade-373000"
	Aug 29 18:59:14 running-upgrade-373000 kubelet[10944]: I0829 18:59:14.548255   10944 topology_manager.go:200] "Topology Admit Handler"
	Aug 29 18:59:14 running-upgrade-373000 kubelet[10944]: I0829 18:59:14.574152   10944 kuberuntime_manager.go:1095] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Aug 29 18:59:14 running-upgrade-373000 kubelet[10944]: I0829 18:59:14.574267   10944 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/26da363a-7ef0-477b-a107-c38a01775daf-tmp\") pod \"storage-provisioner\" (UID: \"26da363a-7ef0-477b-a107-c38a01775daf\") " pod="kube-system/storage-provisioner"
	Aug 29 18:59:14 running-upgrade-373000 kubelet[10944]: I0829 18:59:14.574283   10944 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8tms\" (UniqueName: \"kubernetes.io/projected/26da363a-7ef0-477b-a107-c38a01775daf-kube-api-access-x8tms\") pod \"storage-provisioner\" (UID: \"26da363a-7ef0-477b-a107-c38a01775daf\") " pod="kube-system/storage-provisioner"
	Aug 29 18:59:14 running-upgrade-373000 kubelet[10944]: I0829 18:59:14.574612   10944 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Aug 29 18:59:14 running-upgrade-373000 kubelet[10944]: E0829 18:59:14.680132   10944 projected.go:286] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Aug 29 18:59:14 running-upgrade-373000 kubelet[10944]: E0829 18:59:14.680153   10944 projected.go:192] Error preparing data for projected volume kube-api-access-x8tms for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
	Aug 29 18:59:14 running-upgrade-373000 kubelet[10944]: E0829 18:59:14.680193   10944 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/26da363a-7ef0-477b-a107-c38a01775daf-kube-api-access-x8tms podName:26da363a-7ef0-477b-a107-c38a01775daf nodeName:}" failed. No retries permitted until 2024-08-29 18:59:15.180179435 +0000 UTC m=+13.168181203 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-x8tms" (UniqueName: "kubernetes.io/projected/26da363a-7ef0-477b-a107-c38a01775daf-kube-api-access-x8tms") pod "storage-provisioner" (UID: "26da363a-7ef0-477b-a107-c38a01775daf") : configmap "kube-root-ca.crt" not found
	Aug 29 18:59:15 running-upgrade-373000 kubelet[10944]: I0829 18:59:15.197441   10944 topology_manager.go:200] "Topology Admit Handler"
	Aug 29 18:59:15 running-upgrade-373000 kubelet[10944]: I0829 18:59:15.385867   10944 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1ccc14a1-da6b-4431-b1a6-30f3788ca4a3-xtables-lock\") pod \"kube-proxy-rh5n6\" (UID: \"1ccc14a1-da6b-4431-b1a6-30f3788ca4a3\") " pod="kube-system/kube-proxy-rh5n6"
	Aug 29 18:59:15 running-upgrade-373000 kubelet[10944]: I0829 18:59:15.385931   10944 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1ccc14a1-da6b-4431-b1a6-30f3788ca4a3-kube-proxy\") pod \"kube-proxy-rh5n6\" (UID: \"1ccc14a1-da6b-4431-b1a6-30f3788ca4a3\") " pod="kube-system/kube-proxy-rh5n6"
	Aug 29 18:59:15 running-upgrade-373000 kubelet[10944]: I0829 18:59:15.385946   10944 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1ccc14a1-da6b-4431-b1a6-30f3788ca4a3-lib-modules\") pod \"kube-proxy-rh5n6\" (UID: \"1ccc14a1-da6b-4431-b1a6-30f3788ca4a3\") " pod="kube-system/kube-proxy-rh5n6"
	Aug 29 18:59:15 running-upgrade-373000 kubelet[10944]: I0829 18:59:15.385969   10944 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nq7qm\" (UniqueName: \"kubernetes.io/projected/1ccc14a1-da6b-4431-b1a6-30f3788ca4a3-kube-api-access-nq7qm\") pod \"kube-proxy-rh5n6\" (UID: \"1ccc14a1-da6b-4431-b1a6-30f3788ca4a3\") " pod="kube-system/kube-proxy-rh5n6"
	Aug 29 18:59:15 running-upgrade-373000 kubelet[10944]: I0829 18:59:15.399611   10944 topology_manager.go:200] "Topology Admit Handler"
	Aug 29 18:59:15 running-upgrade-373000 kubelet[10944]: I0829 18:59:15.404796   10944 topology_manager.go:200] "Topology Admit Handler"
	Aug 29 18:59:15 running-upgrade-373000 kubelet[10944]: I0829 18:59:15.486682   10944 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0bcded05-3cfb-474e-b17d-2033e54556e6-config-volume\") pod \"coredns-6d4b75cb6d-x8785\" (UID: \"0bcded05-3cfb-474e-b17d-2033e54556e6\") " pod="kube-system/coredns-6d4b75cb6d-x8785"
	Aug 29 18:59:15 running-upgrade-373000 kubelet[10944]: I0829 18:59:15.486733   10944 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/585dea07-72af-4585-aeaa-794e6508d22b-config-volume\") pod \"coredns-6d4b75cb6d-75d6k\" (UID: \"585dea07-72af-4585-aeaa-794e6508d22b\") " pod="kube-system/coredns-6d4b75cb6d-75d6k"
	Aug 29 18:59:15 running-upgrade-373000 kubelet[10944]: I0829 18:59:15.486790   10944 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfzgc\" (UniqueName: \"kubernetes.io/projected/0bcded05-3cfb-474e-b17d-2033e54556e6-kube-api-access-lfzgc\") pod \"coredns-6d4b75cb6d-x8785\" (UID: \"0bcded05-3cfb-474e-b17d-2033e54556e6\") " pod="kube-system/coredns-6d4b75cb6d-x8785"
	Aug 29 18:59:15 running-upgrade-373000 kubelet[10944]: I0829 18:59:15.486802   10944 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pt6ls\" (UniqueName: \"kubernetes.io/projected/585dea07-72af-4585-aeaa-794e6508d22b-kube-api-access-pt6ls\") pod \"coredns-6d4b75cb6d-75d6k\" (UID: \"585dea07-72af-4585-aeaa-794e6508d22b\") " pod="kube-system/coredns-6d4b75cb6d-75d6k"
	Aug 29 19:03:02 running-upgrade-373000 kubelet[10944]: I0829 19:03:02.103871   10944 scope.go:110] "RemoveContainer" containerID="fe5c1d0576796ce709e57318203d4c5845ec990de1ba01eb73c0b80de6f46ce7"
	Aug 29 19:03:02 running-upgrade-373000 kubelet[10944]: I0829 19:03:02.109627   10944 scope.go:110] "RemoveContainer" containerID="1be58859c7a225abd02f5d70a7d239be556bb5c07f19d70533d09f0f5830ca71"
	
	
	==> storage-provisioner [d76c6c38a8c3] <==
	I0829 18:59:15.662637       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0829 18:59:15.666490       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0829 18:59:15.666545       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0829 18:59:15.669626       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0829 18:59:15.669747       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"151a7f03-9e3b-4394-9af8-04a7f4c1950a", APIVersion:"v1", ResourceVersion:"356", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' running-upgrade-373000_fa89b4bd-24af-4230-b2f9-30a6c62f400a became leader
	I0829 18:59:15.669765       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_running-upgrade-373000_fa89b4bd-24af-4230-b2f9-30a6c62f400a!
	I0829 18:59:15.769985       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_running-upgrade-373000_fa89b4bd-24af-4230-b2f9-30a6c62f400a!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-373000 -n running-upgrade-373000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-373000 -n running-upgrade-373000: exit status 2 (15.710902042s)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "running-upgrade-373000" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "running-upgrade-373000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-373000
--- FAIL: TestRunningBinaryUpgrade (655.73s)

TestKubernetesUpgrade (19.04s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-361000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-361000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (10.053163042s)

-- stdout --
	* [kubernetes-upgrade-361000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19531
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19531-965/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19531-965/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubernetes-upgrade-361000" primary control-plane node in "kubernetes-upgrade-361000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-361000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0829 11:52:24.235403    3994 out.go:345] Setting OutFile to fd 1 ...
	I0829 11:52:24.235526    3994 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 11:52:24.235529    3994 out.go:358] Setting ErrFile to fd 2...
	I0829 11:52:24.235531    3994 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 11:52:24.235665    3994 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19531-965/.minikube/bin
	I0829 11:52:24.236709    3994 out.go:352] Setting JSON to false
	I0829 11:52:24.252571    3994 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3108,"bootTime":1724954436,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0829 11:52:24.252650    3994 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0829 11:52:24.257357    3994 out.go:177] * [kubernetes-upgrade-361000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0829 11:52:24.265292    3994 out.go:177]   - MINIKUBE_LOCATION=19531
	I0829 11:52:24.265343    3994 notify.go:220] Checking for updates...
	I0829 11:52:24.273262    3994 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19531-965/kubeconfig
	I0829 11:52:24.276340    3994 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0829 11:52:24.277751    3994 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0829 11:52:24.280313    3994 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19531-965/.minikube
	I0829 11:52:24.283340    3994 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0829 11:52:24.286677    3994 config.go:182] Loaded profile config "multinode-531000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0829 11:52:24.286745    3994 config.go:182] Loaded profile config "offline-docker-200000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0829 11:52:24.286791    3994 driver.go:392] Setting default libvirt URI to qemu:///system
	I0829 11:52:24.291332    3994 out.go:177] * Using the qemu2 driver based on user configuration
	I0829 11:52:24.298370    3994 start.go:297] selected driver: qemu2
	I0829 11:52:24.298378    3994 start.go:901] validating driver "qemu2" against <nil>
	I0829 11:52:24.298385    3994 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0829 11:52:24.300509    3994 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0829 11:52:24.304339    3994 out.go:177] * Automatically selected the socket_vmnet network
	I0829 11:52:24.307418    3994 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0829 11:52:24.307432    3994 cni.go:84] Creating CNI manager for ""
	I0829 11:52:24.307439    3994 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0829 11:52:24.307465    3994 start.go:340] cluster config:
	{Name:kubernetes-upgrade-361000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-361000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 11:52:24.310948    3994 iso.go:125] acquiring lock: {Name:mke1867919ff797f263eb38fb348ce00bba9d753 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 11:52:24.319348    3994 out.go:177] * Starting "kubernetes-upgrade-361000" primary control-plane node in "kubernetes-upgrade-361000" cluster
	I0829 11:52:24.323383    3994 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0829 11:52:24.323400    3994 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19531-965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0829 11:52:24.323414    3994 cache.go:56] Caching tarball of preloaded images
	I0829 11:52:24.323492    3994 preload.go:172] Found /Users/jenkins/minikube-integration/19531-965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0829 11:52:24.323498    3994 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0829 11:52:24.323561    3994 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/kubernetes-upgrade-361000/config.json ...
	I0829 11:52:24.323577    3994 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/kubernetes-upgrade-361000/config.json: {Name:mk1a9f71a0e852241db912c8ef02811b15d107cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 11:52:24.323786    3994 start.go:360] acquireMachinesLock for kubernetes-upgrade-361000: {Name:mk54c878341df5948b162b01843176e59d4f0973 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0829 11:52:24.427514    3994 start.go:364] duration metric: took 103.70725ms to acquireMachinesLock for "kubernetes-upgrade-361000"
	I0829 11:52:24.427550    3994 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-361000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-361000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0829 11:52:24.427643    3994 start.go:125] createHost starting for "" (driver="qemu2")
	I0829 11:52:24.433966    3994 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0829 11:52:24.466031    3994 start.go:159] libmachine.API.Create for "kubernetes-upgrade-361000" (driver="qemu2")
	I0829 11:52:24.466076    3994 client.go:168] LocalClient.Create starting
	I0829 11:52:24.466193    3994 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19531-965/.minikube/certs/ca.pem
	I0829 11:52:24.466244    3994 main.go:141] libmachine: Decoding PEM data...
	I0829 11:52:24.466261    3994 main.go:141] libmachine: Parsing certificate...
	I0829 11:52:24.466318    3994 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19531-965/.minikube/certs/cert.pem
	I0829 11:52:24.466357    3994 main.go:141] libmachine: Decoding PEM data...
	I0829 11:52:24.466374    3994 main.go:141] libmachine: Parsing certificate...
	I0829 11:52:24.466981    3994 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19531-965/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19531-965/.minikube/cache/iso/arm64/minikube-v1.33.1-1724775098-19521-arm64.iso...
	I0829 11:52:24.630667    3994 main.go:141] libmachine: Creating SSH key...
	I0829 11:52:24.723707    3994 main.go:141] libmachine: Creating Disk image...
	I0829 11:52:24.723712    3994 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0829 11:52:24.723897    3994 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19531-965/.minikube/machines/kubernetes-upgrade-361000/disk.qcow2.raw /Users/jenkins/minikube-integration/19531-965/.minikube/machines/kubernetes-upgrade-361000/disk.qcow2
	I0829 11:52:24.733516    3994 main.go:141] libmachine: STDOUT: 
	I0829 11:52:24.733539    3994 main.go:141] libmachine: STDERR: 
	I0829 11:52:24.733588    3994 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19531-965/.minikube/machines/kubernetes-upgrade-361000/disk.qcow2 +20000M
	I0829 11:52:24.741652    3994 main.go:141] libmachine: STDOUT: Image resized.
	
	I0829 11:52:24.741665    3994 main.go:141] libmachine: STDERR: 
	I0829 11:52:24.741682    3994 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19531-965/.minikube/machines/kubernetes-upgrade-361000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19531-965/.minikube/machines/kubernetes-upgrade-361000/disk.qcow2
	I0829 11:52:24.741688    3994 main.go:141] libmachine: Starting QEMU VM...
	I0829 11:52:24.741704    3994 qemu.go:418] Using hvf for hardware acceleration
	I0829 11:52:24.741732    3994 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19531-965/.minikube/machines/kubernetes-upgrade-361000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19531-965/.minikube/machines/kubernetes-upgrade-361000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19531-965/.minikube/machines/kubernetes-upgrade-361000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:1d:23:d1:83:1e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19531-965/.minikube/machines/kubernetes-upgrade-361000/disk.qcow2
	I0829 11:52:24.743332    3994 main.go:141] libmachine: STDOUT: 
	I0829 11:52:24.743346    3994 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0829 11:52:24.743362    3994 client.go:171] duration metric: took 277.283125ms to LocalClient.Create
	I0829 11:52:26.745509    3994 start.go:128] duration metric: took 2.3178755s to createHost
	I0829 11:52:26.745579    3994 start.go:83] releasing machines lock for "kubernetes-upgrade-361000", held for 2.318079083s
	W0829 11:52:26.745670    3994 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0829 11:52:26.756888    3994 out.go:177] * Deleting "kubernetes-upgrade-361000" in qemu2 ...
	W0829 11:52:26.794169    3994 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0829 11:52:26.794192    3994 start.go:729] Will try again in 5 seconds ...
	I0829 11:52:31.796362    3994 start.go:360] acquireMachinesLock for kubernetes-upgrade-361000: {Name:mk54c878341df5948b162b01843176e59d4f0973 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0829 11:52:31.842622    3994 start.go:364] duration metric: took 46.159417ms to acquireMachinesLock for "kubernetes-upgrade-361000"
	I0829 11:52:31.842793    3994 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-361000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-361000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0829 11:52:31.843158    3994 start.go:125] createHost starting for "" (driver="qemu2")
	I0829 11:52:31.859521    3994 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0829 11:52:31.912751    3994 start.go:159] libmachine.API.Create for "kubernetes-upgrade-361000" (driver="qemu2")
	I0829 11:52:31.912802    3994 client.go:168] LocalClient.Create starting
	I0829 11:52:31.912927    3994 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19531-965/.minikube/certs/ca.pem
	I0829 11:52:31.913001    3994 main.go:141] libmachine: Decoding PEM data...
	I0829 11:52:31.913019    3994 main.go:141] libmachine: Parsing certificate...
	I0829 11:52:31.913083    3994 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19531-965/.minikube/certs/cert.pem
	I0829 11:52:31.913130    3994 main.go:141] libmachine: Decoding PEM data...
	I0829 11:52:31.913140    3994 main.go:141] libmachine: Parsing certificate...
	I0829 11:52:31.913622    3994 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19531-965/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19531-965/.minikube/cache/iso/arm64/minikube-v1.33.1-1724775098-19521-arm64.iso...
	I0829 11:52:32.113758    3994 main.go:141] libmachine: Creating SSH key...
	I0829 11:52:32.207151    3994 main.go:141] libmachine: Creating Disk image...
	I0829 11:52:32.207160    3994 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0829 11:52:32.207333    3994 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19531-965/.minikube/machines/kubernetes-upgrade-361000/disk.qcow2.raw /Users/jenkins/minikube-integration/19531-965/.minikube/machines/kubernetes-upgrade-361000/disk.qcow2
	I0829 11:52:32.216980    3994 main.go:141] libmachine: STDOUT: 
	I0829 11:52:32.216999    3994 main.go:141] libmachine: STDERR: 
	I0829 11:52:32.217048    3994 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19531-965/.minikube/machines/kubernetes-upgrade-361000/disk.qcow2 +20000M
	I0829 11:52:32.225272    3994 main.go:141] libmachine: STDOUT: Image resized.
	
	I0829 11:52:32.225286    3994 main.go:141] libmachine: STDERR: 
	I0829 11:52:32.225309    3994 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19531-965/.minikube/machines/kubernetes-upgrade-361000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19531-965/.minikube/machines/kubernetes-upgrade-361000/disk.qcow2
	I0829 11:52:32.225315    3994 main.go:141] libmachine: Starting QEMU VM...
	I0829 11:52:32.225324    3994 qemu.go:418] Using hvf for hardware acceleration
	I0829 11:52:32.225347    3994 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19531-965/.minikube/machines/kubernetes-upgrade-361000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19531-965/.minikube/machines/kubernetes-upgrade-361000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19531-965/.minikube/machines/kubernetes-upgrade-361000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:07:0c:f4:da:69 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19531-965/.minikube/machines/kubernetes-upgrade-361000/disk.qcow2
	I0829 11:52:32.226995    3994 main.go:141] libmachine: STDOUT: 
	I0829 11:52:32.227011    3994 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0829 11:52:32.227022    3994 client.go:171] duration metric: took 314.220875ms to LocalClient.Create
	I0829 11:52:34.228069    3994 start.go:128] duration metric: took 2.384907583s to createHost
	I0829 11:52:34.228137    3994 start.go:83] releasing machines lock for "kubernetes-upgrade-361000", held for 2.385519708s
	W0829 11:52:34.228379    3994 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-361000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-361000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0829 11:52:34.234140    3994 out.go:201] 
	W0829 11:52:34.238244    3994 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0829 11:52:34.238280    3994 out.go:270] * 
	* 
	W0829 11:52:34.239878    3994 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0829 11:52:34.249067    3994 out.go:201] 

** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-361000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
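Note: both start attempts in the log above fail at the same step: QEMU's network helper cannot reach the socket_vmnet daemon ("Failed to connect to \"/var/run/socket_vmnet\": Connection refused"). A minimal host-side check, assuming the Homebrew-managed socket_vmnet service this job appears to rely on (the socket path is taken from the log; the service name is an assumption):

    # Hypothetical diagnosis steps; socket path from the log above.
    ls -l /var/run/socket_vmnet              # the daemon's socket should exist if it is running
    sudo launchctl list | grep -i vmnet      # is any socket_vmnet daemon loaded?
    sudo brew services restart socket_vmnet  # restart, assuming a Homebrew-managed install

If the daemon is down, every qemu2 test in this report that sets Network:socket_vmnet is expected to fail the same way.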
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-361000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-arm64 stop -p kubernetes-upgrade-361000: (3.601896209s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-361000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-361000 status --format={{.Host}}: exit status 7 (63.663041ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-361000 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-361000 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.175358709s)

-- stdout --
	* [kubernetes-upgrade-361000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19531
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19531-965/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19531-965/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "kubernetes-upgrade-361000" primary control-plane node in "kubernetes-upgrade-361000" cluster
	* Restarting existing qemu2 VM for "kubernetes-upgrade-361000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-361000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

-- /stdout --
** stderr ** 
	I0829 11:52:37.959665    4045 out.go:345] Setting OutFile to fd 1 ...
	I0829 11:52:37.959799    4045 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 11:52:37.959802    4045 out.go:358] Setting ErrFile to fd 2...
	I0829 11:52:37.959804    4045 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 11:52:37.959935    4045 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19531-965/.minikube/bin
	I0829 11:52:37.960939    4045 out.go:352] Setting JSON to false
	I0829 11:52:37.977008    4045 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3121,"bootTime":1724954436,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0829 11:52:37.977158    4045 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0829 11:52:37.980658    4045 out.go:177] * [kubernetes-upgrade-361000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0829 11:52:37.987632    4045 out.go:177]   - MINIKUBE_LOCATION=19531
	I0829 11:52:37.987694    4045 notify.go:220] Checking for updates...
	I0829 11:52:37.994527    4045 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19531-965/kubeconfig
	I0829 11:52:37.997445    4045 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0829 11:52:38.000554    4045 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0829 11:52:38.003608    4045 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19531-965/.minikube
	I0829 11:52:38.004851    4045 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0829 11:52:38.007820    4045 config.go:182] Loaded profile config "kubernetes-upgrade-361000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0829 11:52:38.008075    4045 driver.go:392] Setting default libvirt URI to qemu:///system
	I0829 11:52:38.011541    4045 out.go:177] * Using the qemu2 driver based on existing profile
	I0829 11:52:38.019292    4045 start.go:297] selected driver: qemu2
	I0829 11:52:38.019303    4045 start.go:901] validating driver "qemu2" against &{Name:kubernetes-upgrade-361000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-361000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 11:52:38.019349    4045 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0829 11:52:38.021645    4045 cni.go:84] Creating CNI manager for ""
	I0829 11:52:38.021672    4045 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0829 11:52:38.021700    4045 start.go:340] cluster config:
	{Name:kubernetes-upgrade-361000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:kubernetes-upgrade-361000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 11:52:38.025053    4045 iso.go:125] acquiring lock: {Name:mke1867919ff797f263eb38fb348ce00bba9d753 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 11:52:38.029470    4045 out.go:177] * Starting "kubernetes-upgrade-361000" primary control-plane node in "kubernetes-upgrade-361000" cluster
	I0829 11:52:38.037585    4045 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0829 11:52:38.037599    4045 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19531-965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0829 11:52:38.037609    4045 cache.go:56] Caching tarball of preloaded images
	I0829 11:52:38.037670    4045 preload.go:172] Found /Users/jenkins/minikube-integration/19531-965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0829 11:52:38.037675    4045 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0829 11:52:38.037722    4045 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/kubernetes-upgrade-361000/config.json ...
	I0829 11:52:38.038140    4045 start.go:360] acquireMachinesLock for kubernetes-upgrade-361000: {Name:mk54c878341df5948b162b01843176e59d4f0973 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0829 11:52:38.038176    4045 start.go:364] duration metric: took 29µs to acquireMachinesLock for "kubernetes-upgrade-361000"
	I0829 11:52:38.038186    4045 start.go:96] Skipping create...Using existing machine configuration
	I0829 11:52:38.038199    4045 fix.go:54] fixHost starting: 
	I0829 11:52:38.038318    4045 fix.go:112] recreateIfNeeded on kubernetes-upgrade-361000: state=Stopped err=<nil>
	W0829 11:52:38.038326    4045 fix.go:138] unexpected machine state, will restart: <nil>
	I0829 11:52:38.045558    4045 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-361000" ...
	I0829 11:52:38.049612    4045 qemu.go:418] Using hvf for hardware acceleration
	I0829 11:52:38.049653    4045 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19531-965/.minikube/machines/kubernetes-upgrade-361000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19531-965/.minikube/machines/kubernetes-upgrade-361000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19531-965/.minikube/machines/kubernetes-upgrade-361000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:07:0c:f4:da:69 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19531-965/.minikube/machines/kubernetes-upgrade-361000/disk.qcow2
	I0829 11:52:38.051608    4045 main.go:141] libmachine: STDOUT: 
	I0829 11:52:38.051706    4045 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0829 11:52:38.051738    4045 fix.go:56] duration metric: took 13.539834ms for fixHost
	I0829 11:52:38.051743    4045 start.go:83] releasing machines lock for "kubernetes-upgrade-361000", held for 13.562875ms
	W0829 11:52:38.051750    4045 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0829 11:52:38.051779    4045 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0829 11:52:38.051783    4045 start.go:729] Will try again in 5 seconds ...
	I0829 11:52:43.051821    4045 start.go:360] acquireMachinesLock for kubernetes-upgrade-361000: {Name:mk54c878341df5948b162b01843176e59d4f0973 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0829 11:52:43.052011    4045 start.go:364] duration metric: took 150.459µs to acquireMachinesLock for "kubernetes-upgrade-361000"
	I0829 11:52:43.052064    4045 start.go:96] Skipping create...Using existing machine configuration
	I0829 11:52:43.052070    4045 fix.go:54] fixHost starting: 
	I0829 11:52:43.052253    4045 fix.go:112] recreateIfNeeded on kubernetes-upgrade-361000: state=Stopped err=<nil>
	W0829 11:52:43.052260    4045 fix.go:138] unexpected machine state, will restart: <nil>
	I0829 11:52:43.056524    4045 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-361000" ...
	I0829 11:52:43.064242    4045 qemu.go:418] Using hvf for hardware acceleration
	I0829 11:52:43.064306    4045 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19531-965/.minikube/machines/kubernetes-upgrade-361000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19531-965/.minikube/machines/kubernetes-upgrade-361000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19531-965/.minikube/machines/kubernetes-upgrade-361000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:07:0c:f4:da:69 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19531-965/.minikube/machines/kubernetes-upgrade-361000/disk.qcow2
	I0829 11:52:43.067180    4045 main.go:141] libmachine: STDOUT: 
	I0829 11:52:43.067209    4045 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0829 11:52:43.067234    4045 fix.go:56] duration metric: took 15.165042ms for fixHost
	I0829 11:52:43.067240    4045 start.go:83] releasing machines lock for "kubernetes-upgrade-361000", held for 15.221875ms
	W0829 11:52:43.067297    4045 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-361000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-361000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0829 11:52:43.080457    4045 out.go:201] 
	W0829 11:52:43.084491    4045 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0829 11:52:43.084504    4045 out.go:270] * 
	* 
	W0829 11:52:43.085258    4045 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0829 11:52:43.095294    4045 out.go:201] 

                                                
                                                
** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-361000 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-361000 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-361000 version --output=json: exit status 1 (40.341666ms)

** stderr **
	error: context "kubernetes-upgrade-361000" does not exist

** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
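Note: the kubectl failure is a consequence of the aborted start: the profile never reached the point where its context is written to the kubeconfig. Two standard kubectl commands confirm this (nothing assumed beyond the default kubeconfig resolution):

    kubectl config get-contexts     # lists known contexts; "kubernetes-upgrade-361000" is absent here
    kubectl config current-context  # errors out when no context is selected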
panic.go:626: *** TestKubernetesUpgrade FAILED at 2024-08-29 11:52:43.145973 -0700 PDT m=+2888.093246585
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-361000 -n kubernetes-upgrade-361000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-361000 -n kubernetes-upgrade-361000: exit status 7 (31.823583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-361000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-361000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-361000
--- FAIL: TestKubernetesUpgrade (19.04s)

TestStoppedBinaryUpgrade/Upgrade (610.03s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.3475370184 start -p stopped-upgrade-585000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:183: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.3475370184 start -p stopped-upgrade-585000 --memory=2200 --vm-driver=qemu2 : (1m11.173933667s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.3475370184 -p stopped-upgrade-585000 stop
version_upgrade_test.go:192: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.3475370184 -p stopped-upgrade-585000 stop: (12.103527709s)
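Note: at this point the test holds a stopped cluster created by the old v1.26.0 binary, and the step below, restarting that cluster with the freshly built binary, is where the run fails after 8m46s. A sketch of the same flow by hand, using the two binaries named in the log:

    # Sketch of the TestStoppedBinaryUpgrade/Upgrade flow; binary paths from the log.
    OLD=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.3475370184
    "$OLD" start -p stopped-upgrade-585000 --memory=2200 --vm-driver=qemu2
    "$OLD" stop -p stopped-upgrade-585000
    out/minikube-darwin-arm64 start -p stopped-upgrade-585000 --memory=2200 --driver=qemu2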
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-arm64 start -p stopped-upgrade-585000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:198: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p stopped-upgrade-585000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m46.665604584s)

-- stdout --
	* [stopped-upgrade-585000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19531
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19531-965/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19531-965/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	* Using the qemu2 driver based on existing profile
	* Starting "stopped-upgrade-585000" primary control-plane node in "stopped-upgrade-585000" cluster
	* Restarting existing qemu2 VM for "stopped-upgrade-585000" ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner

-- /stdout --
** stderr ** 
	I0829 11:53:56.416124    4103 out.go:345] Setting OutFile to fd 1 ...
	I0829 11:53:56.416246    4103 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 11:53:56.416250    4103 out.go:358] Setting ErrFile to fd 2...
	I0829 11:53:56.416253    4103 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 11:53:56.416383    4103 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19531-965/.minikube/bin
	I0829 11:53:56.417414    4103 out.go:352] Setting JSON to false
	I0829 11:53:56.436052    4103 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3200,"bootTime":1724954436,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0829 11:53:56.436146    4103 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0829 11:53:56.440941    4103 out.go:177] * [stopped-upgrade-585000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0829 11:53:56.447854    4103 out.go:177]   - MINIKUBE_LOCATION=19531
	I0829 11:53:56.447948    4103 notify.go:220] Checking for updates...
	I0829 11:53:56.455793    4103 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19531-965/kubeconfig
	I0829 11:53:56.458826    4103 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0829 11:53:56.459893    4103 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0829 11:53:56.462829    4103 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19531-965/.minikube
	I0829 11:53:56.465854    4103 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0829 11:53:56.469152    4103 config.go:182] Loaded profile config "stopped-upgrade-585000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0829 11:53:56.472821    4103 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0829 11:53:56.475901    4103 driver.go:392] Setting default libvirt URI to qemu:///system
	I0829 11:53:56.479894    4103 out.go:177] * Using the qemu2 driver based on existing profile
	I0829 11:53:56.486854    4103 start.go:297] selected driver: qemu2
	I0829 11:53:56.486865    4103 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-585000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50284 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-585000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0829 11:53:56.486931    4103 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0829 11:53:56.489483    4103 cni.go:84] Creating CNI manager for ""
	I0829 11:53:56.489504    4103 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0829 11:53:56.489540    4103 start.go:340] cluster config:
	{Name:stopped-upgrade-585000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50284 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-585000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0829 11:53:56.489596    4103 iso.go:125] acquiring lock: {Name:mke1867919ff797f263eb38fb348ce00bba9d753 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 11:53:56.499913    4103 out.go:177] * Starting "stopped-upgrade-585000" primary control-plane node in "stopped-upgrade-585000" cluster
	I0829 11:53:56.505862    4103 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0829 11:53:56.505882    4103 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19531-965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0829 11:53:56.505892    4103 cache.go:56] Caching tarball of preloaded images
	I0829 11:53:56.505951    4103 preload.go:172] Found /Users/jenkins/minikube-integration/19531-965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0829 11:53:56.505957    4103 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0829 11:53:56.506008    4103 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/stopped-upgrade-585000/config.json ...
	I0829 11:53:56.506371    4103 start.go:360] acquireMachinesLock for stopped-upgrade-585000: {Name:mk54c878341df5948b162b01843176e59d4f0973 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0829 11:53:56.506410    4103 start.go:364] duration metric: took 31.334µs to acquireMachinesLock for "stopped-upgrade-585000"
	I0829 11:53:56.506418    4103 start.go:96] Skipping create...Using existing machine configuration
	I0829 11:53:56.506422    4103 fix.go:54] fixHost starting: 
	I0829 11:53:56.506529    4103 fix.go:112] recreateIfNeeded on stopped-upgrade-585000: state=Stopped err=<nil>
	W0829 11:53:56.506537    4103 fix.go:138] unexpected machine state, will restart: <nil>
	I0829 11:53:56.512953    4103 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-585000" ...
	I0829 11:53:56.515896    4103 qemu.go:418] Using hvf for hardware acceleration
	I0829 11:53:56.516014    4103 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.0.2/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19531-965/.minikube/machines/stopped-upgrade-585000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19531-965/.minikube/machines/stopped-upgrade-585000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19531-965/.minikube/machines/stopped-upgrade-585000/qemu.pid -nic user,model=virtio,hostfwd=tcp::50252-:22,hostfwd=tcp::50253-:2376,hostname=stopped-upgrade-585000 -daemonize /Users/jenkins/minikube-integration/19531-965/.minikube/machines/stopped-upgrade-585000/disk.qcow2
	I0829 11:53:56.560131    4103 main.go:141] libmachine: STDOUT: 
	I0829 11:53:56.560158    4103 main.go:141] libmachine: STDERR: 
	I0829 11:53:56.560163    4103 main.go:141] libmachine: Waiting for VM to start (ssh -p 50252 docker@127.0.0.1)...
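Note: unlike the kubernetes-upgrade runs above, this profile was created by minikube v1.26.0 with QEMU user-mode networking (the -nic user option with hostfwd rules in the command above) rather than socket_vmnet, which is why this VM boots despite the daemon being unreachable. A quick host-side check that the SSH forward is listening (port taken from the hostfwd rule):

    nc -z 127.0.0.1 50252 && echo "ssh forward is up"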
	I0829 11:54:16.448582    4103 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/stopped-upgrade-585000/config.json ...
	I0829 11:54:16.449543    4103 machine.go:93] provisionDockerMachine start ...
	I0829 11:54:16.449910    4103 main.go:141] libmachine: Using SSH client type: native
	I0829 11:54:16.450489    4103 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102ff45a0] 0x102ff6e00 <nil>  [] 0s} localhost 50252 <nil> <nil>}
	I0829 11:54:16.450505    4103 main.go:141] libmachine: About to run SSH command:
	hostname
	I0829 11:54:16.540214    4103 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0829 11:54:16.540241    4103 buildroot.go:166] provisioning hostname "stopped-upgrade-585000"
	I0829 11:54:16.540345    4103 main.go:141] libmachine: Using SSH client type: native
	I0829 11:54:16.540589    4103 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102ff45a0] 0x102ff6e00 <nil>  [] 0s} localhost 50252 <nil> <nil>}
	I0829 11:54:16.540599    4103 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-585000 && echo "stopped-upgrade-585000" | sudo tee /etc/hostname
	I0829 11:54:16.617528    4103 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-585000
	
	I0829 11:54:16.617611    4103 main.go:141] libmachine: Using SSH client type: native
	I0829 11:54:16.617780    4103 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102ff45a0] 0x102ff6e00 <nil>  [] 0s} localhost 50252 <nil> <nil>}
	I0829 11:54:16.617791    4103 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-585000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-585000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-585000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0829 11:54:16.685301    4103 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0829 11:54:16.685318    4103 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19531-965/.minikube CaCertPath:/Users/jenkins/minikube-integration/19531-965/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19531-965/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19531-965/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19531-965/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19531-965/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19531-965/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19531-965/.minikube}
	I0829 11:54:16.685332    4103 buildroot.go:174] setting up certificates
	I0829 11:54:16.685337    4103 provision.go:84] configureAuth start
	I0829 11:54:16.685344    4103 provision.go:143] copyHostCerts
	I0829 11:54:16.685420    4103 exec_runner.go:144] found /Users/jenkins/minikube-integration/19531-965/.minikube/ca.pem, removing ...
	I0829 11:54:16.685429    4103 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19531-965/.minikube/ca.pem
	I0829 11:54:16.685731    4103 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19531-965/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19531-965/.minikube/ca.pem (1082 bytes)
	I0829 11:54:16.685925    4103 exec_runner.go:144] found /Users/jenkins/minikube-integration/19531-965/.minikube/cert.pem, removing ...
	I0829 11:54:16.685930    4103 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19531-965/.minikube/cert.pem
	I0829 11:54:16.685984    4103 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19531-965/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19531-965/.minikube/cert.pem (1123 bytes)
	I0829 11:54:16.686082    4103 exec_runner.go:144] found /Users/jenkins/minikube-integration/19531-965/.minikube/key.pem, removing ...
	I0829 11:54:16.686086    4103 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19531-965/.minikube/key.pem
	I0829 11:54:16.686130    4103 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19531-965/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19531-965/.minikube/key.pem (1675 bytes)
	I0829 11:54:16.686210    4103 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19531-965/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19531-965/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19531-965/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-585000 san=[127.0.0.1 localhost minikube stopped-upgrade-585000]
	I0829 11:54:16.858822    4103 provision.go:177] copyRemoteCerts
	I0829 11:54:16.858880    4103 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0829 11:54:16.858890    4103 sshutil.go:53] new ssh client: &{IP:localhost Port:50252 SSHKeyPath:/Users/jenkins/minikube-integration/19531-965/.minikube/machines/stopped-upgrade-585000/id_rsa Username:docker}
	I0829 11:54:16.893365    4103 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19531-965/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0829 11:54:16.900363    4103 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19531-965/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0829 11:54:16.907375    4103 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19531-965/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0829 11:54:16.913877    4103 provision.go:87] duration metric: took 228.538708ms to configureAuth
	I0829 11:54:16.913888    4103 buildroot.go:189] setting minikube options for container-runtime
	I0829 11:54:16.913995    4103 config.go:182] Loaded profile config "stopped-upgrade-585000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0829 11:54:16.914030    4103 main.go:141] libmachine: Using SSH client type: native
	I0829 11:54:16.914116    4103 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102ff45a0] 0x102ff6e00 <nil>  [] 0s} localhost 50252 <nil> <nil>}
	I0829 11:54:16.914121    4103 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0829 11:54:16.979168    4103 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0829 11:54:16.979178    4103 buildroot.go:70] root file system type: tmpfs
	I0829 11:54:16.979239    4103 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0829 11:54:16.979287    4103 main.go:141] libmachine: Using SSH client type: native
	I0829 11:54:16.979408    4103 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102ff45a0] 0x102ff6e00 <nil>  [] 0s} localhost 50252 <nil> <nil>}
	I0829 11:54:16.979442    4103 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0829 11:54:17.050517    4103 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0829 11:54:17.050580    4103 main.go:141] libmachine: Using SSH client type: native
	I0829 11:54:17.050700    4103 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102ff45a0] 0x102ff6e00 <nil>  [] 0s} localhost 50252 <nil> <nil>}
	I0829 11:54:17.050712    4103 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0829 11:54:17.420755    4103 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0829 11:54:17.420768    4103 machine.go:96] duration metric: took 971.227792ms to provisionDockerMachine
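Note: the diff/mv/systemctl one-liner above is an update-only-if-changed idiom: diff exits non-zero when the generated unit differs from (or, as here, does not exist at) the on-disk copy, and only then is the new file moved into place and docker reloaded and restarted. The same pattern generically (a sketch; the file names are hypothetical):

    # Install a new config and reload only when its content actually changed.
    sudo diff -u /etc/example.conf /tmp/example.conf.new \
      || { sudo mv /tmp/example.conf.new /etc/example.conf && sudo systemctl daemon-reload; }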
	I0829 11:54:17.420784    4103 start.go:293] postStartSetup for "stopped-upgrade-585000" (driver="qemu2")
	I0829 11:54:17.420791    4103 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0829 11:54:17.420854    4103 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0829 11:54:17.420864    4103 sshutil.go:53] new ssh client: &{IP:localhost Port:50252 SSHKeyPath:/Users/jenkins/minikube-integration/19531-965/.minikube/machines/stopped-upgrade-585000/id_rsa Username:docker}
	I0829 11:54:17.454818    4103 ssh_runner.go:195] Run: cat /etc/os-release
	I0829 11:54:17.456326    4103 info.go:137] Remote host: Buildroot 2021.02.12
	I0829 11:54:17.456333    4103 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19531-965/.minikube/addons for local assets ...
	I0829 11:54:17.456429    4103 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19531-965/.minikube/files for local assets ...
	I0829 11:54:17.456560    4103 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19531-965/.minikube/files/etc/ssl/certs/14182.pem -> 14182.pem in /etc/ssl/certs
	I0829 11:54:17.456686    4103 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0829 11:54:17.459701    4103 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19531-965/.minikube/files/etc/ssl/certs/14182.pem --> /etc/ssl/certs/14182.pem (1708 bytes)
	I0829 11:54:17.467436    4103 start.go:296] duration metric: took 46.644416ms for postStartSetup
	I0829 11:54:17.467458    4103 fix.go:56] duration metric: took 20.961337125s for fixHost
	I0829 11:54:17.467518    4103 main.go:141] libmachine: Using SSH client type: native
	I0829 11:54:17.467637    4103 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102ff45a0] 0x102ff6e00 <nil>  [] 0s} localhost 50252 <nil> <nil>}
	I0829 11:54:17.467643    4103 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0829 11:54:17.531659    4103 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724957657.162252796
	
	I0829 11:54:17.531668    4103 fix.go:216] guest clock: 1724957657.162252796
	I0829 11:54:17.531673    4103 fix.go:229] Guest: 2024-08-29 11:54:17.162252796 -0700 PDT Remote: 2024-08-29 11:54:17.46746 -0700 PDT m=+21.078708918 (delta=-305.207204ms)
	I0829 11:54:17.531685    4103 fix.go:200] guest clock delta is within tolerance: -305.207204ms
	I0829 11:54:17.531690    4103 start.go:83] releasing machines lock for "stopped-upgrade-585000", held for 21.025578333s
	I0829 11:54:17.531760    4103 ssh_runner.go:195] Run: cat /version.json
	I0829 11:54:17.531763    4103 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0829 11:54:17.531768    4103 sshutil.go:53] new ssh client: &{IP:localhost Port:50252 SSHKeyPath:/Users/jenkins/minikube-integration/19531-965/.minikube/machines/stopped-upgrade-585000/id_rsa Username:docker}
	I0829 11:54:17.531779    4103 sshutil.go:53] new ssh client: &{IP:localhost Port:50252 SSHKeyPath:/Users/jenkins/minikube-integration/19531-965/.minikube/machines/stopped-upgrade-585000/id_rsa Username:docker}
	W0829 11:54:17.532365    4103 sshutil.go:64] dial failure (will retry): dial tcp [::1]:50252: connect: connection refused
	I0829 11:54:17.532393    4103 retry.go:31] will retry after 152.058159ms: dial tcp [::1]:50252: connect: connection refused
	W0829 11:54:17.719181    4103 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0829 11:54:17.719247    4103 ssh_runner.go:195] Run: systemctl --version
	I0829 11:54:17.721075    4103 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0829 11:54:17.722938    4103 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0829 11:54:17.722970    4103 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0829 11:54:17.725810    4103 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0829 11:54:17.730295    4103 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0829 11:54:17.730304    4103 start.go:495] detecting cgroup driver to use...
	I0829 11:54:17.730375    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0829 11:54:17.736956    4103 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0829 11:54:17.740450    4103 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0829 11:54:17.743561    4103 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0829 11:54:17.743588    4103 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0829 11:54:17.746361    4103 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0829 11:54:17.749528    4103 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0829 11:54:17.753054    4103 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0829 11:54:17.756506    4103 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0829 11:54:17.760528    4103 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0829 11:54:17.764128    4103 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0829 11:54:17.767294    4103 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0829 11:54:17.770208    4103 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0829 11:54:17.773450    4103 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0829 11:54:17.776867    4103 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 11:54:17.857114    4103 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0829 11:54:17.863700    4103 start.go:495] detecting cgroup driver to use...
	I0829 11:54:17.863759    4103 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0829 11:54:17.868972    4103 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0829 11:54:17.874015    4103 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0829 11:54:17.884932    4103 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0829 11:54:17.890540    4103 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0829 11:54:17.895157    4103 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0829 11:54:17.933689    4103 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0829 11:54:17.938970    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0829 11:54:17.944680    4103 ssh_runner.go:195] Run: which cri-dockerd
	I0829 11:54:17.946019    4103 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0829 11:54:17.949112    4103 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
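Switching the node from containerd to cri-dockerd is mostly a matter of repointing crictl at the new runtime socket and giving cri-docker its CNI drop-in, as the commands above show. A sketch of the endpoint switch, using the socket path from this run:

    # Point crictl at cri-dockerd instead of containerd.
    sudo mkdir -p /etc
    printf '%s\n' 'runtime-endpoint: unix:///var/run/cri-dockerd.sock' | sudo tee /etc/crictl.yaml
    sudo mkdir -p /etc/systemd/system/cri-docker.service.d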
	I0829 11:54:17.954254    4103 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0829 11:54:18.037114    4103 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0829 11:54:18.122235    4103 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0829 11:54:18.122312    4103 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0829 11:54:18.128059    4103 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 11:54:18.208973    4103 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0829 11:54:19.336158    4103 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.127186667s)
	I0829 11:54:19.336213    4103 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0829 11:54:19.341081    4103 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0829 11:54:19.347609    4103 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0829 11:54:19.352265    4103 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0829 11:54:19.433410    4103 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0829 11:54:19.504491    4103 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 11:54:19.578320    4103 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0829 11:54:19.584285    4103 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0829 11:54:19.588617    4103 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 11:54:19.671434    4103 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0829 11:54:19.709679    4103 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0829 11:54:19.709754    4103 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0829 11:54:19.711718    4103 start.go:563] Will wait 60s for crictl version
	I0829 11:54:19.711771    4103 ssh_runner.go:195] Run: which crictl
	I0829 11:54:19.713311    4103 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0829 11:54:19.727898    4103 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0829 11:54:19.727962    4103 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0829 11:54:19.743607    4103 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0829 11:54:19.764482    4103 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0829 11:54:19.764553    4103 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0829 11:54:19.765864    4103 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
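The one-liner above is minikube's idempotent /etc/hosts update: strip any stale entry for the name, append a fresh one, then copy the temp file back with sudo (a plain output redirect would run as the unprivileged user and fail). The same idiom, spelled out:

    # Replace (or add) the host.minikube.internal entry.
    { grep -v $'\thost.minikube.internal$' /etc/hosts
      printf '10.0.2.2\thost.minikube.internal\n'
    } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts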
	I0829 11:54:19.769603    4103 kubeadm.go:883] updating cluster {Name:stopped-upgrade-585000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50284 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-585000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...

	I0829 11:54:19.769653    4103 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0829 11:54:19.769693    4103 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0829 11:54:19.779752    4103 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0829 11:54:19.779762    4103 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0829 11:54:19.779806    4103 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0829 11:54:19.782862    4103 ssh_runner.go:195] Run: which lz4
	I0829 11:54:19.784128    4103 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0829 11:54:19.785398    4103 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0829 11:54:19.785407    4103 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19531-965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0829 11:54:20.726435    4103 docker.go:649] duration metric: took 942.343459ms to copy over tarball
	I0829 11:54:20.726517    4103 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0829 11:54:21.906353    4103 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.179838792s)
	I0829 11:54:21.906367    4103 ssh_runner.go:146] rm: /preloaded.tar.lz4
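The preload path above is the fast route to a populated image store: once the cached tarball has been copied up (the scp at 11:54:19), a single tar invocation unpacks the whole docker image store, after which the tarball is deleted. The extract step, verbatim from the log:

    # Unpack the preloaded image store into /var, keeping xattrs so
    # file capabilities on the bundled binaries survive.
    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
    sudo rm /preloaded.tar.lz4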
	I0829 11:54:21.922646    4103 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0829 11:54:21.925830    4103 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0829 11:54:21.931147    4103 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 11:54:22.009954    4103 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0829 11:54:24.318172    4103 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.308234042s)
	I0829 11:54:24.318278    4103 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0829 11:54:24.329654    4103 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0829 11:54:24.329664    4103 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
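Note why LoadCachedImages fires at all here: the preloaded tarball ships images tagged under the old k8s.gcr.io registry, while this minikube build expects the registry.k8s.io names, so every control-plane component is treated as missing. A hedged illustration of bridging that by hand (docker tag only adds an alias to the local image; it pulls nothing):

    # Hypothetical manual workaround, not what minikube itself does.
    for img in kube-apiserver kube-proxy kube-controller-manager kube-scheduler; do
      docker tag "k8s.gcr.io/${img}:v1.24.1" "registry.k8s.io/${img}:v1.24.1"
    done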
	I0829 11:54:24.329669    4103 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0829 11:54:24.333474    4103 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 11:54:24.335173    4103 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0829 11:54:24.337070    4103 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0829 11:54:24.337167    4103 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 11:54:24.338632    4103 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0829 11:54:24.338944    4103 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0829 11:54:24.339617    4103 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0829 11:54:24.339754    4103 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0829 11:54:24.341072    4103 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I0829 11:54:24.341126    4103 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0829 11:54:24.341594    4103 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0829 11:54:24.342351    4103 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0829 11:54:24.343136    4103 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0829 11:54:24.343361    4103 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0829 11:54:24.343476    4103 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0829 11:54:24.344460    4103 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0829 11:54:25.361735    4103 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0829 11:54:25.396459    4103 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0829 11:54:25.396506    4103 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0829 11:54:25.396604    4103 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0829 11:54:25.414791    4103 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0829 11:54:25.417090    4103 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19531-965/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0829 11:54:25.419709    4103 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0829 11:54:25.420335    4103 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0829 11:54:25.436333    4103 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0829 11:54:25.436357    4103 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0829 11:54:25.436403    4103 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0829 11:54:25.449750    4103 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0829 11:54:25.449769    4103 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0829 11:54:25.449774    4103 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0829 11:54:25.449779    4103 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0829 11:54:25.449819    4103 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0829 11:54:25.449819    4103 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0829 11:54:25.455243    4103 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19531-965/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0829 11:54:25.470468    4103 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19531-965/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0829 11:54:25.470481    4103 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19531-965/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0829 11:54:25.555319    4103 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0829 11:54:25.566783    4103 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0829 11:54:25.566804    4103 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0829 11:54:25.566855    4103 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0829 11:54:25.573714    4103 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0829 11:54:25.578861    4103 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19531-965/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0829 11:54:25.578972    4103 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0829 11:54:25.587598    4103 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0829 11:54:25.587620    4103 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0829 11:54:25.587598    4103 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0829 11:54:25.587644    4103 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19531-965/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0829 11:54:25.587664    4103 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	W0829 11:54:25.589431    4103 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0829 11:54:25.589518    4103 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 11:54:25.595074    4103 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0829 11:54:25.595099    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0829 11:54:25.604216    4103 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19531-965/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0829 11:54:25.604346    4103 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	W0829 11:54:25.607380    4103 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0829 11:54:25.607508    4103 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0829 11:54:25.609725    4103 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0829 11:54:25.609747    4103 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 11:54:25.609783    4103 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 11:54:25.650620    4103 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19531-965/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0829 11:54:25.650636    4103 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0829 11:54:25.650658    4103 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19531-965/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0829 11:54:25.650685    4103 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0829 11:54:25.650699    4103 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19531-965/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0829 11:54:25.650702    4103 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0829 11:54:25.650746    4103 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0829 11:54:25.665713    4103 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19531-965/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0829 11:54:25.665859    4103 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0829 11:54:25.668331    4103 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0829 11:54:25.668352    4103 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19531-965/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0829 11:54:25.754768    4103 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0829 11:54:25.754781    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0829 11:54:25.896499    4103 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19531-965/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0829 11:54:25.940219    4103 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0829 11:54:25.940234    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0829 11:54:26.087790    4103 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19531-965/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0829 11:54:26.087833    4103 cache_images.go:92] duration metric: took 1.758183292s to LoadCachedImages
	W0829 11:54:26.087872    4103 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19531-965/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1: no such file or directory
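Each cached image above goes through the same transfer cycle: inspect the runtime for the expected image ID, remove any stale tag, scp the cached tarball into /var/lib/minikube/images, and pipe it into docker load. The run still ends in the warning above because the kube-scheduler tarball is absent from the local cache. Condensed, for a single image:

    # One iteration of the cache-load cycle (tarball copied up beforehand, as in the log).
    if ! docker image inspect --format '{{.Id}}' registry.k8s.io/pause:3.7 >/dev/null 2>&1; then
      docker rmi registry.k8s.io/pause:3.7 2>/dev/null || true
      sudo cat /var/lib/minikube/images/pause_3.7 | docker load
    fi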
	I0829 11:54:26.087877    4103 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0829 11:54:26.087929    4103 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-585000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-585000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0829 11:54:26.088000    4103 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0829 11:54:26.101737    4103 cni.go:84] Creating CNI manager for ""
	I0829 11:54:26.101752    4103 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0829 11:54:26.101759    4103 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0829 11:54:26.101768    4103 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-585000 NodeName:stopped-upgrade-585000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0829 11:54:26.101839    4103 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-585000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0829 11:54:26.101890    4103 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0829 11:54:26.104741    4103 binaries.go:44] Found k8s binaries, skipping transfer
	I0829 11:54:26.104765    4103 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0829 11:54:26.107673    4103 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0829 11:54:26.112653    4103 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0829 11:54:26.117858    4103 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0829 11:54:26.123372    4103 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0829 11:54:26.124515    4103 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0829 11:54:26.128453    4103 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 11:54:26.208072    4103 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 11:54:26.215708    4103 certs.go:68] Setting up /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/stopped-upgrade-585000 for IP: 10.0.2.15
	I0829 11:54:26.215717    4103 certs.go:194] generating shared ca certs ...
	I0829 11:54:26.215725    4103 certs.go:226] acquiring lock for ca certs: {Name:mk29df1c1b696cda1cc19a90487167bb76984cb6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 11:54:26.215903    4103 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19531-965/.minikube/ca.key
	I0829 11:54:26.215955    4103 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19531-965/.minikube/proxy-client-ca.key
	I0829 11:54:26.215960    4103 certs.go:256] generating profile certs ...
	I0829 11:54:26.216049    4103 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/stopped-upgrade-585000/client.key
	I0829 11:54:26.216065    4103 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/stopped-upgrade-585000/apiserver.key.5da91e5b
	I0829 11:54:26.216074    4103 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/stopped-upgrade-585000/apiserver.crt.5da91e5b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0829 11:54:26.246245    4103 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/stopped-upgrade-585000/apiserver.crt.5da91e5b ...
	I0829 11:54:26.246262    4103 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/stopped-upgrade-585000/apiserver.crt.5da91e5b: {Name:mkdd552c456715cbb42886d565cef7a64afb041e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 11:54:26.247647    4103 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/stopped-upgrade-585000/apiserver.key.5da91e5b ...
	I0829 11:54:26.247659    4103 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/stopped-upgrade-585000/apiserver.key.5da91e5b: {Name:mk7a60d1f2cba376efcc0952d360ad85ceb6bcda Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 11:54:26.247812    4103 certs.go:381] copying /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/stopped-upgrade-585000/apiserver.crt.5da91e5b -> /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/stopped-upgrade-585000/apiserver.crt
	I0829 11:54:26.247954    4103 certs.go:385] copying /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/stopped-upgrade-585000/apiserver.key.5da91e5b -> /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/stopped-upgrade-585000/apiserver.key
	I0829 11:54:26.248123    4103 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/stopped-upgrade-585000/proxy-client.key
	I0829 11:54:26.248259    4103 certs.go:484] found cert: /Users/jenkins/minikube-integration/19531-965/.minikube/certs/1418.pem (1338 bytes)
	W0829 11:54:26.248288    4103 certs.go:480] ignoring /Users/jenkins/minikube-integration/19531-965/.minikube/certs/1418_empty.pem, impossibly tiny 0 bytes
	I0829 11:54:26.248293    4103 certs.go:484] found cert: /Users/jenkins/minikube-integration/19531-965/.minikube/certs/ca-key.pem (1675 bytes)
	I0829 11:54:26.248313    4103 certs.go:484] found cert: /Users/jenkins/minikube-integration/19531-965/.minikube/certs/ca.pem (1082 bytes)
	I0829 11:54:26.248338    4103 certs.go:484] found cert: /Users/jenkins/minikube-integration/19531-965/.minikube/certs/cert.pem (1123 bytes)
	I0829 11:54:26.248354    4103 certs.go:484] found cert: /Users/jenkins/minikube-integration/19531-965/.minikube/certs/key.pem (1675 bytes)
	I0829 11:54:26.248392    4103 certs.go:484] found cert: /Users/jenkins/minikube-integration/19531-965/.minikube/files/etc/ssl/certs/14182.pem (1708 bytes)
	I0829 11:54:26.248746    4103 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19531-965/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0829 11:54:26.261184    4103 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19531-965/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0829 11:54:26.268422    4103 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19531-965/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0829 11:54:26.276279    4103 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19531-965/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0829 11:54:26.283463    4103 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/stopped-upgrade-585000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0829 11:54:26.290091    4103 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/stopped-upgrade-585000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0829 11:54:26.297152    4103 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/stopped-upgrade-585000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0829 11:54:26.304701    4103 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/stopped-upgrade-585000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0829 11:54:26.312103    4103 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19531-965/.minikube/certs/1418.pem --> /usr/share/ca-certificates/1418.pem (1338 bytes)
	I0829 11:54:26.319042    4103 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19531-965/.minikube/files/etc/ssl/certs/14182.pem --> /usr/share/ca-certificates/14182.pem (1708 bytes)
	I0829 11:54:26.325853    4103 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19531-965/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0829 11:54:26.333023    4103 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0829 11:54:26.338002    4103 ssh_runner.go:195] Run: openssl version
	I0829 11:54:26.339878    4103 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14182.pem && ln -fs /usr/share/ca-certificates/14182.pem /etc/ssl/certs/14182.pem"
	I0829 11:54:26.342642    4103 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14182.pem
	I0829 11:54:26.344032    4103 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 29 18:20 /usr/share/ca-certificates/14182.pem
	I0829 11:54:26.344053    4103 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14182.pem
	I0829 11:54:26.345725    4103 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14182.pem /etc/ssl/certs/3ec20f2e.0"
	I0829 11:54:26.349086    4103 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0829 11:54:26.352100    4103 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0829 11:54:26.353425    4103 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 29 18:05 /usr/share/ca-certificates/minikubeCA.pem
	I0829 11:54:26.353445    4103 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0829 11:54:26.355152    4103 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0829 11:54:26.357960    4103 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1418.pem && ln -fs /usr/share/ca-certificates/1418.pem /etc/ssl/certs/1418.pem"
	I0829 11:54:26.361274    4103 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1418.pem
	I0829 11:54:26.362694    4103 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 29 18:20 /usr/share/ca-certificates/1418.pem
	I0829 11:54:26.362712    4103 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1418.pem
	I0829 11:54:26.364324    4103 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1418.pem /etc/ssl/certs/51391683.0"
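The ls/openssl/ln sequence above implements OpenSSL's hashed-certificate directory convention: TLS clients look a CA up under /etc/ssl/certs/<subject-hash>.0, so each installed PEM gets a symlink named after its subject hash (b5213941.0 for minikubeCA above). By hand:

    # Compute the subject hash and install the lookup symlink.
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"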
	I0829 11:54:26.367193    4103 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0829 11:54:26.368505    4103 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0829 11:54:26.370366    4103 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0829 11:54:26.372071    4103 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0829 11:54:26.374011    4103 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0829 11:54:26.375826    4103 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0829 11:54:26.377601    4103 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
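The six openssl runs above are 24-hour expiry probes: -checkend 86400 exits non-zero if the certificate will expire within the next 86400 seconds, which is how minikube decides whether a cert needs regenerating before reuse. For example:

    openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
      && echo "valid for at least 24h" || echo "expires within 24h (or unreadable)"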
	I0829 11:54:26.379375    4103 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-585000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:50284 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-585000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0829 11:54:26.379452    4103 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0829 11:54:26.389481    4103 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0829 11:54:26.393077    4103 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0829 11:54:26.393082    4103 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0829 11:54:26.393107    4103 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0829 11:54:26.396929    4103 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0829 11:54:26.397182    4103 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-585000" does not appear in /Users/jenkins/minikube-integration/19531-965/kubeconfig
	I0829 11:54:26.397232    4103 kubeconfig.go:62] /Users/jenkins/minikube-integration/19531-965/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-585000" cluster setting kubeconfig missing "stopped-upgrade-585000" context setting]
	I0829 11:54:26.397364    4103 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19531-965/kubeconfig: {Name:mk8af293b3e18a99fbcb2b7e12f57a5251bf5686 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 11:54:26.397784    4103 kapi.go:59] client config for stopped-upgrade-585000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19531-965/.minikube/profiles/stopped-upgrade-585000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19531-965/.minikube/profiles/stopped-upgrade-585000/client.key", CAFile:"/Users/jenkins/minikube-integration/19531-965/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1045aff80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0829 11:54:26.398153    4103 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0829 11:54:26.401322    4103 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-585000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
	I0829 11:54:26.401328    4103 kubeadm.go:1160] stopping kube-system containers ...
	I0829 11:54:26.401367    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0829 11:54:26.416668    4103 docker.go:483] Stopping containers: [88b8db5c0022 b385a5444da2 0b5b5924d1b0 88ec4ec8b073 0787a7b31af4 fba8ccc4b085 8d803f38d4f8 0064a115500e 7c6d5559c6b7]
	I0829 11:54:26.416737    4103 ssh_runner.go:195] Run: docker stop 88b8db5c0022 b385a5444da2 0b5b5924d1b0 88ec4ec8b073 0787a7b31af4 fba8ccc4b085 8d803f38d4f8 0064a115500e 7c6d5559c6b7
	I0829 11:54:26.427735    4103 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0829 11:54:26.433073    4103 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0829 11:54:26.436110    4103 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0829 11:54:26.436115    4103 kubeadm.go:157] found existing configuration files:
	
	I0829 11:54:26.436142    4103 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50284 /etc/kubernetes/admin.conf
	I0829 11:54:26.438506    4103 kubeadm.go:163] "https://control-plane.minikube.internal:50284" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50284 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0829 11:54:26.438523    4103 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0829 11:54:26.441263    4103 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50284 /etc/kubernetes/kubelet.conf
	I0829 11:54:26.444222    4103 kubeadm.go:163] "https://control-plane.minikube.internal:50284" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50284 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0829 11:54:26.444239    4103 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0829 11:54:26.446864    4103 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50284 /etc/kubernetes/controller-manager.conf
	I0829 11:54:26.449422    4103 kubeadm.go:163] "https://control-plane.minikube.internal:50284" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50284 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0829 11:54:26.449439    4103 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0829 11:54:26.452377    4103 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50284 /etc/kubernetes/scheduler.conf
	I0829 11:54:26.454822    4103 kubeadm.go:163] "https://control-plane.minikube.internal:50284" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50284 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0829 11:54:26.454843    4103 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0829 11:54:26.457405    4103 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0829 11:54:26.460444    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 11:54:26.481917    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 11:54:27.028084    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0829 11:54:27.152370    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0829 11:54:27.176186    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
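Rather than a full kubeadm init, the restart path replays individual init phases against the rendered config, in dependency order: certs, kubeconfigs, kubelet bootstrap, static control-plane manifests, then local etcd. The sequence above, as a loop:

    # $phase is intentionally unquoted so "certs all" splits into two arguments.
    for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
      sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" \
        kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml
    done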
	I0829 11:54:27.196228    4103 api_server.go:52] waiting for apiserver process to appear ...
	I0829 11:54:27.196315    4103 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 11:54:27.698454    4103 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 11:54:28.198365    4103 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 11:54:28.203098    4103 api_server.go:72] duration metric: took 1.006883s to wait for apiserver process to appear ...
	I0829 11:54:28.203108    4103 api_server.go:88] waiting for apiserver healthz status ...
	I0829 11:54:28.203118    4103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0829 11:54:33.204887    4103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0829 11:54:33.204931    4103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0829 11:54:38.205206    4103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0829 11:54:38.205269    4103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0829 11:54:43.205837    4103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0829 11:54:43.205871    4103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0829 11:54:48.206218    4103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0829 11:54:48.206242    4103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0829 11:54:53.206729    4103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0829 11:54:53.206773    4103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0829 11:54:58.207510    4103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0829 11:54:58.207536    4103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0829 11:55:03.208469    4103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0829 11:55:03.208533    4103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0829 11:55:08.209927    4103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0829 11:55:08.209954    4103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0829 11:55:13.211458    4103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0829 11:55:13.211480    4103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0829 11:55:18.213376    4103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0829 11:55:18.213417    4103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0829 11:55:23.215688    4103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0829 11:55:23.215802    4103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0829 11:55:28.218073    4103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
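Every probe above is a five-second GET against the apiserver's healthz endpoint; all of them time out, which is what pushes the run into the log-gathering pass below. The same check from a shell, assuming curl is available on the host:

    # A healthy apiserver prints "ok"; here the request just times out.
    curl -k --max-time 5 https://10.0.2.15:8443/healthz; echo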
	I0829 11:55:28.218346    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0829 11:55:28.251996    4103 logs.go:276] 2 containers: [939d9c84d1e1 88ec4ec8b073]
	I0829 11:55:28.252106    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0829 11:55:28.267874    4103 logs.go:276] 2 containers: [0afeef147c8b fba8ccc4b085]
	I0829 11:55:28.267968    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0829 11:55:28.289649    4103 logs.go:276] 1 containers: [91a0219bc66f]
	I0829 11:55:28.289719    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0829 11:55:28.304437    4103 logs.go:276] 2 containers: [52c287dcd5d2 88b8db5c0022]
	I0829 11:55:28.304502    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0829 11:55:28.314714    4103 logs.go:276] 1 containers: [1d9e93bcb1b2]
	I0829 11:55:28.314782    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0829 11:55:28.325455    4103 logs.go:276] 2 containers: [6871f8a47711 0b5b5924d1b0]
	I0829 11:55:28.325535    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0829 11:55:28.338337    4103 logs.go:276] 0 containers: []
	W0829 11:55:28.338349    4103 logs.go:278] No container was found matching "kindnet"
	I0829 11:55:28.338403    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0829 11:55:28.349886    4103 logs.go:276] 1 containers: [a221e919fb63]
	I0829 11:55:28.349903    4103 logs.go:123] Gathering logs for kube-apiserver [939d9c84d1e1] ...
	I0829 11:55:28.349909    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 939d9c84d1e1"
	I0829 11:55:28.364238    4103 logs.go:123] Gathering logs for etcd [0afeef147c8b] ...
	I0829 11:55:28.364247    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0afeef147c8b"
	I0829 11:55:28.377952    4103 logs.go:123] Gathering logs for kube-scheduler [88b8db5c0022] ...
	I0829 11:55:28.377967    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88b8db5c0022"
	I0829 11:55:28.396094    4103 logs.go:123] Gathering logs for storage-provisioner [a221e919fb63] ...
	I0829 11:55:28.396106    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a221e919fb63"
	I0829 11:55:28.407534    4103 logs.go:123] Gathering logs for describe nodes ...
	I0829 11:55:28.407546    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0829 11:55:28.520063    4103 logs.go:123] Gathering logs for kube-apiserver [88ec4ec8b073] ...
	I0829 11:55:28.520076    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88ec4ec8b073"
	I0829 11:55:28.560308    4103 logs.go:123] Gathering logs for kube-controller-manager [0b5b5924d1b0] ...
	I0829 11:55:28.560320    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b5b5924d1b0"
	I0829 11:55:28.574908    4103 logs.go:123] Gathering logs for Docker ...
	I0829 11:55:28.574919    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0829 11:55:28.600144    4103 logs.go:123] Gathering logs for container status ...
	I0829 11:55:28.600153    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 11:55:28.613853    4103 logs.go:123] Gathering logs for dmesg ...
	I0829 11:55:28.613865    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 11:55:28.618210    4103 logs.go:123] Gathering logs for kube-proxy [1d9e93bcb1b2] ...
	I0829 11:55:28.618216    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d9e93bcb1b2"
	I0829 11:55:28.630575    4103 logs.go:123] Gathering logs for kube-controller-manager [6871f8a47711] ...
	I0829 11:55:28.630589    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6871f8a47711"
	I0829 11:55:28.651214    4103 logs.go:123] Gathering logs for etcd [fba8ccc4b085] ...
	I0829 11:55:28.651227    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fba8ccc4b085"
	I0829 11:55:28.668825    4103 logs.go:123] Gathering logs for coredns [91a0219bc66f] ...
	I0829 11:55:28.668837    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91a0219bc66f"
	I0829 11:55:28.680140    4103 logs.go:123] Gathering logs for kube-scheduler [52c287dcd5d2] ...
	I0829 11:55:28.680153    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52c287dcd5d2"
	I0829 11:55:28.692529    4103 logs.go:123] Gathering logs for kubelet ...
	I0829 11:55:28.692540    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 11:55:31.229286    4103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0829 11:55:36.231419    4103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0829 11:55:36.231517    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0829 11:55:36.244532    4103 logs.go:276] 2 containers: [939d9c84d1e1 88ec4ec8b073]
	I0829 11:55:36.244609    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0829 11:55:36.255276    4103 logs.go:276] 2 containers: [0afeef147c8b fba8ccc4b085]
	I0829 11:55:36.255345    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0829 11:55:36.265822    4103 logs.go:276] 1 containers: [91a0219bc66f]
	I0829 11:55:36.265894    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0829 11:55:36.276956    4103 logs.go:276] 2 containers: [52c287dcd5d2 88b8db5c0022]
	I0829 11:55:36.277026    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0829 11:55:36.287277    4103 logs.go:276] 1 containers: [1d9e93bcb1b2]
	I0829 11:55:36.287345    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0829 11:55:36.298069    4103 logs.go:276] 2 containers: [6871f8a47711 0b5b5924d1b0]
	I0829 11:55:36.298136    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0829 11:55:36.308279    4103 logs.go:276] 0 containers: []
	W0829 11:55:36.308290    4103 logs.go:278] No container was found matching "kindnet"
	I0829 11:55:36.308358    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0829 11:55:36.321469    4103 logs.go:276] 1 containers: [a221e919fb63]
	I0829 11:55:36.321489    4103 logs.go:123] Gathering logs for etcd [0afeef147c8b] ...
	I0829 11:55:36.321494    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0afeef147c8b"
	I0829 11:55:36.335200    4103 logs.go:123] Gathering logs for kube-proxy [1d9e93bcb1b2] ...
	I0829 11:55:36.335216    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d9e93bcb1b2"
	I0829 11:55:36.346447    4103 logs.go:123] Gathering logs for Docker ...
	I0829 11:55:36.346460    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0829 11:55:36.372325    4103 logs.go:123] Gathering logs for container status ...
	I0829 11:55:36.372334    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 11:55:36.384070    4103 logs.go:123] Gathering logs for dmesg ...
	I0829 11:55:36.384081    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 11:55:36.388412    4103 logs.go:123] Gathering logs for kube-apiserver [939d9c84d1e1] ...
	I0829 11:55:36.388421    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 939d9c84d1e1"
	I0829 11:55:36.401816    4103 logs.go:123] Gathering logs for coredns [91a0219bc66f] ...
	I0829 11:55:36.401831    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91a0219bc66f"
	I0829 11:55:36.412735    4103 logs.go:123] Gathering logs for kube-controller-manager [6871f8a47711] ...
	I0829 11:55:36.412747    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6871f8a47711"
	I0829 11:55:36.430318    4103 logs.go:123] Gathering logs for kube-controller-manager [0b5b5924d1b0] ...
	I0829 11:55:36.430332    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b5b5924d1b0"
	I0829 11:55:36.446123    4103 logs.go:123] Gathering logs for kubelet ...
	I0829 11:55:36.446141    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 11:55:36.485845    4103 logs.go:123] Gathering logs for describe nodes ...
	I0829 11:55:36.485856    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0829 11:55:36.550695    4103 logs.go:123] Gathering logs for kube-apiserver [88ec4ec8b073] ...
	I0829 11:55:36.550707    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88ec4ec8b073"
	I0829 11:55:36.589275    4103 logs.go:123] Gathering logs for etcd [fba8ccc4b085] ...
	I0829 11:55:36.589291    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fba8ccc4b085"
	I0829 11:55:36.603266    4103 logs.go:123] Gathering logs for kube-scheduler [52c287dcd5d2] ...
	I0829 11:55:36.603277    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52c287dcd5d2"
	I0829 11:55:36.615506    4103 logs.go:123] Gathering logs for kube-scheduler [88b8db5c0022] ...
	I0829 11:55:36.615518    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88b8db5c0022"
	I0829 11:55:36.627833    4103 logs.go:123] Gathering logs for storage-provisioner [a221e919fb63] ...
	I0829 11:55:36.627845    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a221e919fb63"
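	Each iteration of the block above has the same shape: api_server.go probes https://10.0.2.15:8443/healthz, the probe fails (the 11:55:31 → 11:55:36 gap suggests a ~5 s client timeout), and logs.go then enumerates every control-plane container and tails its logs before the next probe. The shell sketch below reconstructs one iteration using only commands that appear verbatim in the log; the curl probe is an assumption standing in for minikube's in-process HTTP client, and the container ID shown is the one captured in this particular run.

	    # One iteration of the diagnostic loop, reconstructed from the log above.
	    # Assumption: curl stands in for minikube's in-process healthz client;
	    # the ~5 s timeout is inferred from the log timestamps, not confirmed.
	    curl -ks --max-time 5 https://10.0.2.15:8443/healthz || {
	      # Enumerate control-plane containers by their k8s_<component> names
	      # (exactly the filters issued via ssh_runner in the log).
	      for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	               kube-controller-manager kindnet storage-provisioner; do
	        docker ps -a --filter=name=k8s_"$c" --format='{{.ID}}'
	      done
	      # Tail one container's log; 939d9c84d1e1 is the kube-apiserver ID
	      # captured in this run and would differ on any other run.
	      docker logs --tail 400 939d9c84d1e1
	      # Host-side context gathered in the same pass.
	      sudo journalctl -u kubelet -n 400
	      sudo journalctl -u docker -u cri-docker -n 400
	      sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	      sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a
	      sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes \
	           --kubeconfig=/var/lib/minikube/kubeconfig
	    }

	The loop never exits via the healthz branch in this capture: every probe through 11:57 returns "context deadline exceeded", so only the fallback gathering path runs.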
	I0829 11:55:39.141586    4103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0829 11:55:44.144304    4103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0829 11:55:44.144536    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0829 11:55:44.162025    4103 logs.go:276] 2 containers: [939d9c84d1e1 88ec4ec8b073]
	I0829 11:55:44.162113    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0829 11:55:44.175739    4103 logs.go:276] 2 containers: [0afeef147c8b fba8ccc4b085]
	I0829 11:55:44.175827    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0829 11:55:44.187188    4103 logs.go:276] 1 containers: [91a0219bc66f]
	I0829 11:55:44.187263    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0829 11:55:44.202230    4103 logs.go:276] 2 containers: [52c287dcd5d2 88b8db5c0022]
	I0829 11:55:44.202296    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0829 11:55:44.213552    4103 logs.go:276] 1 containers: [1d9e93bcb1b2]
	I0829 11:55:44.213620    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0829 11:55:44.223961    4103 logs.go:276] 2 containers: [6871f8a47711 0b5b5924d1b0]
	I0829 11:55:44.224042    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0829 11:55:44.233765    4103 logs.go:276] 0 containers: []
	W0829 11:55:44.233777    4103 logs.go:278] No container was found matching "kindnet"
	I0829 11:55:44.233838    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0829 11:55:44.244341    4103 logs.go:276] 1 containers: [a221e919fb63]
	I0829 11:55:44.244357    4103 logs.go:123] Gathering logs for etcd [0afeef147c8b] ...
	I0829 11:55:44.244362    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0afeef147c8b"
	I0829 11:55:44.258572    4103 logs.go:123] Gathering logs for kube-proxy [1d9e93bcb1b2] ...
	I0829 11:55:44.258583    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d9e93bcb1b2"
	I0829 11:55:44.274661    4103 logs.go:123] Gathering logs for kubelet ...
	I0829 11:55:44.274677    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 11:55:44.312948    4103 logs.go:123] Gathering logs for etcd [fba8ccc4b085] ...
	I0829 11:55:44.312957    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fba8ccc4b085"
	I0829 11:55:44.327523    4103 logs.go:123] Gathering logs for coredns [91a0219bc66f] ...
	I0829 11:55:44.327534    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91a0219bc66f"
	I0829 11:55:44.339050    4103 logs.go:123] Gathering logs for kube-scheduler [52c287dcd5d2] ...
	I0829 11:55:44.339060    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52c287dcd5d2"
	I0829 11:55:44.351033    4103 logs.go:123] Gathering logs for kube-scheduler [88b8db5c0022] ...
	I0829 11:55:44.351046    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88b8db5c0022"
	I0829 11:55:44.362827    4103 logs.go:123] Gathering logs for kube-controller-manager [0b5b5924d1b0] ...
	I0829 11:55:44.362836    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b5b5924d1b0"
	I0829 11:55:44.376817    4103 logs.go:123] Gathering logs for storage-provisioner [a221e919fb63] ...
	I0829 11:55:44.376829    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a221e919fb63"
	I0829 11:55:44.388466    4103 logs.go:123] Gathering logs for describe nodes ...
	I0829 11:55:44.388479    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0829 11:55:44.425196    4103 logs.go:123] Gathering logs for kube-apiserver [939d9c84d1e1] ...
	I0829 11:55:44.425207    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 939d9c84d1e1"
	I0829 11:55:44.439252    4103 logs.go:123] Gathering logs for kube-apiserver [88ec4ec8b073] ...
	I0829 11:55:44.439263    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88ec4ec8b073"
	I0829 11:55:44.485573    4103 logs.go:123] Gathering logs for dmesg ...
	I0829 11:55:44.485587    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 11:55:44.489756    4103 logs.go:123] Gathering logs for kube-controller-manager [6871f8a47711] ...
	I0829 11:55:44.489763    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6871f8a47711"
	I0829 11:55:44.507627    4103 logs.go:123] Gathering logs for Docker ...
	I0829 11:55:44.507640    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0829 11:55:44.532913    4103 logs.go:123] Gathering logs for container status ...
	I0829 11:55:44.532921    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 11:55:47.047025    4103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0829 11:55:52.049253    4103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0829 11:55:52.049456    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0829 11:55:52.065776    4103 logs.go:276] 2 containers: [939d9c84d1e1 88ec4ec8b073]
	I0829 11:55:52.065865    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0829 11:55:52.078434    4103 logs.go:276] 2 containers: [0afeef147c8b fba8ccc4b085]
	I0829 11:55:52.078505    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0829 11:55:52.089195    4103 logs.go:276] 1 containers: [91a0219bc66f]
	I0829 11:55:52.089254    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0829 11:55:52.100050    4103 logs.go:276] 2 containers: [52c287dcd5d2 88b8db5c0022]
	I0829 11:55:52.100122    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0829 11:55:52.110818    4103 logs.go:276] 1 containers: [1d9e93bcb1b2]
	I0829 11:55:52.110878    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0829 11:55:52.121631    4103 logs.go:276] 2 containers: [6871f8a47711 0b5b5924d1b0]
	I0829 11:55:52.121704    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0829 11:55:52.131721    4103 logs.go:276] 0 containers: []
	W0829 11:55:52.131733    4103 logs.go:278] No container was found matching "kindnet"
	I0829 11:55:52.131788    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0829 11:55:52.142521    4103 logs.go:276] 1 containers: [a221e919fb63]
	I0829 11:55:52.142542    4103 logs.go:123] Gathering logs for kubelet ...
	I0829 11:55:52.142548    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 11:55:52.180720    4103 logs.go:123] Gathering logs for describe nodes ...
	I0829 11:55:52.180734    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0829 11:55:52.217331    4103 logs.go:123] Gathering logs for kube-apiserver [939d9c84d1e1] ...
	I0829 11:55:52.217342    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 939d9c84d1e1"
	I0829 11:55:52.231643    4103 logs.go:123] Gathering logs for container status ...
	I0829 11:55:52.231655    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 11:55:52.243898    4103 logs.go:123] Gathering logs for dmesg ...
	I0829 11:55:52.243912    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 11:55:52.248058    4103 logs.go:123] Gathering logs for kube-scheduler [88b8db5c0022] ...
	I0829 11:55:52.248064    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88b8db5c0022"
	I0829 11:55:52.259864    4103 logs.go:123] Gathering logs for etcd [fba8ccc4b085] ...
	I0829 11:55:52.259877    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fba8ccc4b085"
	I0829 11:55:52.274440    4103 logs.go:123] Gathering logs for coredns [91a0219bc66f] ...
	I0829 11:55:52.274453    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91a0219bc66f"
	I0829 11:55:52.285666    4103 logs.go:123] Gathering logs for kube-proxy [1d9e93bcb1b2] ...
	I0829 11:55:52.285677    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d9e93bcb1b2"
	I0829 11:55:52.296898    4103 logs.go:123] Gathering logs for kube-controller-manager [6871f8a47711] ...
	I0829 11:55:52.296912    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6871f8a47711"
	I0829 11:55:52.314211    4103 logs.go:123] Gathering logs for kube-controller-manager [0b5b5924d1b0] ...
	I0829 11:55:52.314223    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b5b5924d1b0"
	I0829 11:55:52.336331    4103 logs.go:123] Gathering logs for Docker ...
	I0829 11:55:52.336341    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0829 11:55:52.360646    4103 logs.go:123] Gathering logs for kube-apiserver [88ec4ec8b073] ...
	I0829 11:55:52.360655    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88ec4ec8b073"
	I0829 11:55:52.399107    4103 logs.go:123] Gathering logs for etcd [0afeef147c8b] ...
	I0829 11:55:52.399117    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0afeef147c8b"
	I0829 11:55:52.416910    4103 logs.go:123] Gathering logs for kube-scheduler [52c287dcd5d2] ...
	I0829 11:55:52.416921    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52c287dcd5d2"
	I0829 11:55:52.428627    4103 logs.go:123] Gathering logs for storage-provisioner [a221e919fb63] ...
	I0829 11:55:52.428637    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a221e919fb63"
	I0829 11:55:54.942464    4103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0829 11:55:59.944776    4103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0829 11:55:59.945166    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0829 11:55:59.979115    4103 logs.go:276] 2 containers: [939d9c84d1e1 88ec4ec8b073]
	I0829 11:55:59.979257    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0829 11:56:00.006658    4103 logs.go:276] 2 containers: [0afeef147c8b fba8ccc4b085]
	I0829 11:56:00.006742    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0829 11:56:00.021475    4103 logs.go:276] 1 containers: [91a0219bc66f]
	I0829 11:56:00.021567    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0829 11:56:00.032453    4103 logs.go:276] 2 containers: [52c287dcd5d2 88b8db5c0022]
	I0829 11:56:00.032521    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0829 11:56:00.042819    4103 logs.go:276] 1 containers: [1d9e93bcb1b2]
	I0829 11:56:00.042888    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0829 11:56:00.053202    4103 logs.go:276] 2 containers: [6871f8a47711 0b5b5924d1b0]
	I0829 11:56:00.053252    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0829 11:56:00.063295    4103 logs.go:276] 0 containers: []
	W0829 11:56:00.063306    4103 logs.go:278] No container was found matching "kindnet"
	I0829 11:56:00.063362    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0829 11:56:00.076652    4103 logs.go:276] 1 containers: [a221e919fb63]
	I0829 11:56:00.076671    4103 logs.go:123] Gathering logs for kubelet ...
	I0829 11:56:00.076678    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 11:56:00.114298    4103 logs.go:123] Gathering logs for describe nodes ...
	I0829 11:56:00.114306    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0829 11:56:00.150043    4103 logs.go:123] Gathering logs for kube-apiserver [88ec4ec8b073] ...
	I0829 11:56:00.150056    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88ec4ec8b073"
	I0829 11:56:00.187720    4103 logs.go:123] Gathering logs for etcd [0afeef147c8b] ...
	I0829 11:56:00.187730    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0afeef147c8b"
	I0829 11:56:00.201487    4103 logs.go:123] Gathering logs for etcd [fba8ccc4b085] ...
	I0829 11:56:00.201500    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fba8ccc4b085"
	I0829 11:56:00.215997    4103 logs.go:123] Gathering logs for dmesg ...
	I0829 11:56:00.216010    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 11:56:00.220449    4103 logs.go:123] Gathering logs for kube-proxy [1d9e93bcb1b2] ...
	I0829 11:56:00.220456    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d9e93bcb1b2"
	I0829 11:56:00.232537    4103 logs.go:123] Gathering logs for kube-apiserver [939d9c84d1e1] ...
	I0829 11:56:00.232551    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 939d9c84d1e1"
	I0829 11:56:00.246492    4103 logs.go:123] Gathering logs for coredns [91a0219bc66f] ...
	I0829 11:56:00.246503    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91a0219bc66f"
	I0829 11:56:00.257801    4103 logs.go:123] Gathering logs for kube-scheduler [52c287dcd5d2] ...
	I0829 11:56:00.257812    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52c287dcd5d2"
	I0829 11:56:00.275696    4103 logs.go:123] Gathering logs for kube-scheduler [88b8db5c0022] ...
	I0829 11:56:00.275710    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88b8db5c0022"
	I0829 11:56:00.287846    4103 logs.go:123] Gathering logs for storage-provisioner [a221e919fb63] ...
	I0829 11:56:00.287860    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a221e919fb63"
	I0829 11:56:00.303510    4103 logs.go:123] Gathering logs for kube-controller-manager [6871f8a47711] ...
	I0829 11:56:00.303521    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6871f8a47711"
	I0829 11:56:00.320807    4103 logs.go:123] Gathering logs for kube-controller-manager [0b5b5924d1b0] ...
	I0829 11:56:00.320820    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b5b5924d1b0"
	I0829 11:56:00.339427    4103 logs.go:123] Gathering logs for Docker ...
	I0829 11:56:00.339439    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0829 11:56:00.363129    4103 logs.go:123] Gathering logs for container status ...
	I0829 11:56:00.363139    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 11:56:02.877316    4103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0829 11:56:07.879621    4103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0829 11:56:07.880091    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0829 11:56:07.919680    4103 logs.go:276] 2 containers: [939d9c84d1e1 88ec4ec8b073]
	I0829 11:56:07.919825    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0829 11:56:07.942404    4103 logs.go:276] 2 containers: [0afeef147c8b fba8ccc4b085]
	I0829 11:56:07.942508    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0829 11:56:07.957440    4103 logs.go:276] 1 containers: [91a0219bc66f]
	I0829 11:56:07.957554    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0829 11:56:07.970300    4103 logs.go:276] 2 containers: [52c287dcd5d2 88b8db5c0022]
	I0829 11:56:07.970378    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0829 11:56:07.981016    4103 logs.go:276] 1 containers: [1d9e93bcb1b2]
	I0829 11:56:07.981085    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0829 11:56:07.991707    4103 logs.go:276] 2 containers: [6871f8a47711 0b5b5924d1b0]
	I0829 11:56:07.991779    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0829 11:56:08.002125    4103 logs.go:276] 0 containers: []
	W0829 11:56:08.002137    4103 logs.go:278] No container was found matching "kindnet"
	I0829 11:56:08.002193    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0829 11:56:08.012623    4103 logs.go:276] 1 containers: [a221e919fb63]
	I0829 11:56:08.012641    4103 logs.go:123] Gathering logs for kube-apiserver [88ec4ec8b073] ...
	I0829 11:56:08.012647    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88ec4ec8b073"
	I0829 11:56:08.049132    4103 logs.go:123] Gathering logs for coredns [91a0219bc66f] ...
	I0829 11:56:08.049144    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91a0219bc66f"
	I0829 11:56:08.063605    4103 logs.go:123] Gathering logs for kube-scheduler [88b8db5c0022] ...
	I0829 11:56:08.063617    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88b8db5c0022"
	I0829 11:56:08.075519    4103 logs.go:123] Gathering logs for container status ...
	I0829 11:56:08.075532    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 11:56:08.088329    4103 logs.go:123] Gathering logs for kube-controller-manager [6871f8a47711] ...
	I0829 11:56:08.088340    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6871f8a47711"
	I0829 11:56:08.106470    4103 logs.go:123] Gathering logs for storage-provisioner [a221e919fb63] ...
	I0829 11:56:08.106484    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a221e919fb63"
	I0829 11:56:08.122122    4103 logs.go:123] Gathering logs for dmesg ...
	I0829 11:56:08.122134    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 11:56:08.126210    4103 logs.go:123] Gathering logs for kube-apiserver [939d9c84d1e1] ...
	I0829 11:56:08.126219    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 939d9c84d1e1"
	I0829 11:56:08.140054    4103 logs.go:123] Gathering logs for etcd [fba8ccc4b085] ...
	I0829 11:56:08.140066    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fba8ccc4b085"
	I0829 11:56:08.155354    4103 logs.go:123] Gathering logs for kube-proxy [1d9e93bcb1b2] ...
	I0829 11:56:08.155363    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d9e93bcb1b2"
	I0829 11:56:08.167456    4103 logs.go:123] Gathering logs for etcd [0afeef147c8b] ...
	I0829 11:56:08.167467    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0afeef147c8b"
	I0829 11:56:08.181018    4103 logs.go:123] Gathering logs for kube-scheduler [52c287dcd5d2] ...
	I0829 11:56:08.181029    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52c287dcd5d2"
	I0829 11:56:08.194266    4103 logs.go:123] Gathering logs for Docker ...
	I0829 11:56:08.194277    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0829 11:56:08.218013    4103 logs.go:123] Gathering logs for kubelet ...
	I0829 11:56:08.218023    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 11:56:08.256685    4103 logs.go:123] Gathering logs for describe nodes ...
	I0829 11:56:08.256695    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0829 11:56:08.292658    4103 logs.go:123] Gathering logs for kube-controller-manager [0b5b5924d1b0] ...
	I0829 11:56:08.292670    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b5b5924d1b0"
	I0829 11:56:10.813452    4103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0829 11:56:15.815643    4103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0829 11:56:15.815805    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0829 11:56:15.831616    4103 logs.go:276] 2 containers: [939d9c84d1e1 88ec4ec8b073]
	I0829 11:56:15.831694    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0829 11:56:15.846377    4103 logs.go:276] 2 containers: [0afeef147c8b fba8ccc4b085]
	I0829 11:56:15.846452    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0829 11:56:15.860946    4103 logs.go:276] 1 containers: [91a0219bc66f]
	I0829 11:56:15.861026    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0829 11:56:15.871692    4103 logs.go:276] 2 containers: [52c287dcd5d2 88b8db5c0022]
	I0829 11:56:15.871763    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0829 11:56:15.882562    4103 logs.go:276] 1 containers: [1d9e93bcb1b2]
	I0829 11:56:15.882630    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0829 11:56:15.893652    4103 logs.go:276] 2 containers: [6871f8a47711 0b5b5924d1b0]
	I0829 11:56:15.893748    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0829 11:56:15.904074    4103 logs.go:276] 0 containers: []
	W0829 11:56:15.904091    4103 logs.go:278] No container was found matching "kindnet"
	I0829 11:56:15.904153    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0829 11:56:15.914773    4103 logs.go:276] 1 containers: [a221e919fb63]
	I0829 11:56:15.914789    4103 logs.go:123] Gathering logs for kube-apiserver [939d9c84d1e1] ...
	I0829 11:56:15.914794    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 939d9c84d1e1"
	I0829 11:56:15.928537    4103 logs.go:123] Gathering logs for kube-controller-manager [0b5b5924d1b0] ...
	I0829 11:56:15.928550    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b5b5924d1b0"
	I0829 11:56:15.943805    4103 logs.go:123] Gathering logs for kube-controller-manager [6871f8a47711] ...
	I0829 11:56:15.943821    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6871f8a47711"
	I0829 11:56:15.964705    4103 logs.go:123] Gathering logs for Docker ...
	I0829 11:56:15.964717    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0829 11:56:15.991629    4103 logs.go:123] Gathering logs for kubelet ...
	I0829 11:56:15.991642    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 11:56:16.030774    4103 logs.go:123] Gathering logs for describe nodes ...
	I0829 11:56:16.030782    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0829 11:56:16.070939    4103 logs.go:123] Gathering logs for kube-apiserver [88ec4ec8b073] ...
	I0829 11:56:16.070949    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88ec4ec8b073"
	I0829 11:56:16.109439    4103 logs.go:123] Gathering logs for etcd [fba8ccc4b085] ...
	I0829 11:56:16.109452    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fba8ccc4b085"
	I0829 11:56:16.123839    4103 logs.go:123] Gathering logs for kube-scheduler [88b8db5c0022] ...
	I0829 11:56:16.123849    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88b8db5c0022"
	I0829 11:56:16.136373    4103 logs.go:123] Gathering logs for kube-proxy [1d9e93bcb1b2] ...
	I0829 11:56:16.136383    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d9e93bcb1b2"
	I0829 11:56:16.147927    4103 logs.go:123] Gathering logs for dmesg ...
	I0829 11:56:16.147939    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 11:56:16.152252    4103 logs.go:123] Gathering logs for etcd [0afeef147c8b] ...
	I0829 11:56:16.152263    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0afeef147c8b"
	I0829 11:56:16.166558    4103 logs.go:123] Gathering logs for kube-scheduler [52c287dcd5d2] ...
	I0829 11:56:16.166569    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52c287dcd5d2"
	I0829 11:56:16.178949    4103 logs.go:123] Gathering logs for coredns [91a0219bc66f] ...
	I0829 11:56:16.178960    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91a0219bc66f"
	I0829 11:56:16.189612    4103 logs.go:123] Gathering logs for storage-provisioner [a221e919fb63] ...
	I0829 11:56:16.189624    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a221e919fb63"
	I0829 11:56:16.201207    4103 logs.go:123] Gathering logs for container status ...
	I0829 11:56:16.201218    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 11:56:18.716516    4103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0829 11:56:23.718893    4103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0829 11:56:23.719084    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0829 11:56:23.734402    4103 logs.go:276] 2 containers: [939d9c84d1e1 88ec4ec8b073]
	I0829 11:56:23.734485    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0829 11:56:23.746444    4103 logs.go:276] 2 containers: [0afeef147c8b fba8ccc4b085]
	I0829 11:56:23.746509    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0829 11:56:23.757671    4103 logs.go:276] 1 containers: [91a0219bc66f]
	I0829 11:56:23.757743    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0829 11:56:23.768160    4103 logs.go:276] 2 containers: [52c287dcd5d2 88b8db5c0022]
	I0829 11:56:23.768227    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0829 11:56:23.778786    4103 logs.go:276] 1 containers: [1d9e93bcb1b2]
	I0829 11:56:23.778854    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0829 11:56:23.800280    4103 logs.go:276] 2 containers: [6871f8a47711 0b5b5924d1b0]
	I0829 11:56:23.800348    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0829 11:56:23.810006    4103 logs.go:276] 0 containers: []
	W0829 11:56:23.810023    4103 logs.go:278] No container was found matching "kindnet"
	I0829 11:56:23.810078    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0829 11:56:23.820698    4103 logs.go:276] 1 containers: [a221e919fb63]
	I0829 11:56:23.820717    4103 logs.go:123] Gathering logs for describe nodes ...
	I0829 11:56:23.820722    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0829 11:56:23.856361    4103 logs.go:123] Gathering logs for kube-apiserver [88ec4ec8b073] ...
	I0829 11:56:23.856376    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88ec4ec8b073"
	I0829 11:56:23.898401    4103 logs.go:123] Gathering logs for kube-scheduler [52c287dcd5d2] ...
	I0829 11:56:23.898413    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52c287dcd5d2"
	I0829 11:56:23.910251    4103 logs.go:123] Gathering logs for storage-provisioner [a221e919fb63] ...
	I0829 11:56:23.910264    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a221e919fb63"
	I0829 11:56:23.921399    4103 logs.go:123] Gathering logs for kubelet ...
	I0829 11:56:23.921411    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 11:56:23.960829    4103 logs.go:123] Gathering logs for dmesg ...
	I0829 11:56:23.960845    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 11:56:23.965611    4103 logs.go:123] Gathering logs for etcd [0afeef147c8b] ...
	I0829 11:56:23.965618    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0afeef147c8b"
	I0829 11:56:23.979663    4103 logs.go:123] Gathering logs for etcd [fba8ccc4b085] ...
	I0829 11:56:23.979673    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fba8ccc4b085"
	I0829 11:56:23.994467    4103 logs.go:123] Gathering logs for kube-scheduler [88b8db5c0022] ...
	I0829 11:56:23.994478    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88b8db5c0022"
	I0829 11:56:24.006113    4103 logs.go:123] Gathering logs for Docker ...
	I0829 11:56:24.006126    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0829 11:56:24.029876    4103 logs.go:123] Gathering logs for kube-apiserver [939d9c84d1e1] ...
	I0829 11:56:24.029886    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 939d9c84d1e1"
	I0829 11:56:24.043432    4103 logs.go:123] Gathering logs for coredns [91a0219bc66f] ...
	I0829 11:56:24.043444    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91a0219bc66f"
	I0829 11:56:24.055068    4103 logs.go:123] Gathering logs for kube-proxy [1d9e93bcb1b2] ...
	I0829 11:56:24.055079    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d9e93bcb1b2"
	I0829 11:56:24.067052    4103 logs.go:123] Gathering logs for kube-controller-manager [6871f8a47711] ...
	I0829 11:56:24.067063    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6871f8a47711"
	I0829 11:56:24.084551    4103 logs.go:123] Gathering logs for kube-controller-manager [0b5b5924d1b0] ...
	I0829 11:56:24.084562    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b5b5924d1b0"
	I0829 11:56:24.099814    4103 logs.go:123] Gathering logs for container status ...
	I0829 11:56:24.099825    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 11:56:26.614126    4103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0829 11:56:31.616682    4103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0829 11:56:31.616873    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0829 11:56:31.638447    4103 logs.go:276] 2 containers: [939d9c84d1e1 88ec4ec8b073]
	I0829 11:56:31.638565    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0829 11:56:31.657656    4103 logs.go:276] 2 containers: [0afeef147c8b fba8ccc4b085]
	I0829 11:56:31.657744    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0829 11:56:31.669715    4103 logs.go:276] 1 containers: [91a0219bc66f]
	I0829 11:56:31.669788    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0829 11:56:31.681930    4103 logs.go:276] 2 containers: [52c287dcd5d2 88b8db5c0022]
	I0829 11:56:31.682022    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0829 11:56:31.692641    4103 logs.go:276] 1 containers: [1d9e93bcb1b2]
	I0829 11:56:31.692715    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0829 11:56:31.703884    4103 logs.go:276] 2 containers: [6871f8a47711 0b5b5924d1b0]
	I0829 11:56:31.703964    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0829 11:56:31.714112    4103 logs.go:276] 0 containers: []
	W0829 11:56:31.714124    4103 logs.go:278] No container was found matching "kindnet"
	I0829 11:56:31.714178    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0829 11:56:31.724130    4103 logs.go:276] 1 containers: [a221e919fb63]
	I0829 11:56:31.724148    4103 logs.go:123] Gathering logs for etcd [fba8ccc4b085] ...
	I0829 11:56:31.724153    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fba8ccc4b085"
	I0829 11:56:31.738758    4103 logs.go:123] Gathering logs for kube-scheduler [88b8db5c0022] ...
	I0829 11:56:31.738769    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88b8db5c0022"
	I0829 11:56:31.750347    4103 logs.go:123] Gathering logs for Docker ...
	I0829 11:56:31.750357    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0829 11:56:31.774873    4103 logs.go:123] Gathering logs for describe nodes ...
	I0829 11:56:31.774885    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0829 11:56:31.809781    4103 logs.go:123] Gathering logs for kube-apiserver [939d9c84d1e1] ...
	I0829 11:56:31.809792    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 939d9c84d1e1"
	I0829 11:56:31.825038    4103 logs.go:123] Gathering logs for coredns [91a0219bc66f] ...
	I0829 11:56:31.825048    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91a0219bc66f"
	I0829 11:56:31.836024    4103 logs.go:123] Gathering logs for kube-scheduler [52c287dcd5d2] ...
	I0829 11:56:31.836035    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52c287dcd5d2"
	I0829 11:56:31.847597    4103 logs.go:123] Gathering logs for kube-controller-manager [6871f8a47711] ...
	I0829 11:56:31.847610    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6871f8a47711"
	I0829 11:56:31.865141    4103 logs.go:123] Gathering logs for kubelet ...
	I0829 11:56:31.865152    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 11:56:31.903546    4103 logs.go:123] Gathering logs for dmesg ...
	I0829 11:56:31.903560    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 11:56:31.908148    4103 logs.go:123] Gathering logs for kube-apiserver [88ec4ec8b073] ...
	I0829 11:56:31.908155    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88ec4ec8b073"
	I0829 11:56:31.946250    4103 logs.go:123] Gathering logs for storage-provisioner [a221e919fb63] ...
	I0829 11:56:31.946262    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a221e919fb63"
	I0829 11:56:31.957628    4103 logs.go:123] Gathering logs for etcd [0afeef147c8b] ...
	I0829 11:56:31.957639    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0afeef147c8b"
	I0829 11:56:31.971452    4103 logs.go:123] Gathering logs for kube-proxy [1d9e93bcb1b2] ...
	I0829 11:56:31.971464    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d9e93bcb1b2"
	I0829 11:56:31.983186    4103 logs.go:123] Gathering logs for kube-controller-manager [0b5b5924d1b0] ...
	I0829 11:56:31.983196    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b5b5924d1b0"
	I0829 11:56:31.998385    4103 logs.go:123] Gathering logs for container status ...
	I0829 11:56:31.998397    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 11:56:34.512471    4103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0829 11:56:39.514836    4103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0829 11:56:39.515108    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0829 11:56:39.535431    4103 logs.go:276] 2 containers: [939d9c84d1e1 88ec4ec8b073]
	I0829 11:56:39.535532    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0829 11:56:39.549928    4103 logs.go:276] 2 containers: [0afeef147c8b fba8ccc4b085]
	I0829 11:56:39.550004    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0829 11:56:39.563266    4103 logs.go:276] 1 containers: [91a0219bc66f]
	I0829 11:56:39.563332    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0829 11:56:39.573931    4103 logs.go:276] 2 containers: [52c287dcd5d2 88b8db5c0022]
	I0829 11:56:39.574007    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0829 11:56:39.584076    4103 logs.go:276] 1 containers: [1d9e93bcb1b2]
	I0829 11:56:39.584146    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0829 11:56:39.594949    4103 logs.go:276] 2 containers: [6871f8a47711 0b5b5924d1b0]
	I0829 11:56:39.595019    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0829 11:56:39.605114    4103 logs.go:276] 0 containers: []
	W0829 11:56:39.605125    4103 logs.go:278] No container was found matching "kindnet"
	I0829 11:56:39.605184    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0829 11:56:39.616143    4103 logs.go:276] 1 containers: [a221e919fb63]
	I0829 11:56:39.616160    4103 logs.go:123] Gathering logs for dmesg ...
	I0829 11:56:39.616166    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 11:56:39.620898    4103 logs.go:123] Gathering logs for kube-controller-manager [0b5b5924d1b0] ...
	I0829 11:56:39.620907    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b5b5924d1b0"
	I0829 11:56:39.634830    4103 logs.go:123] Gathering logs for describe nodes ...
	I0829 11:56:39.634842    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0829 11:56:39.671397    4103 logs.go:123] Gathering logs for kube-apiserver [88ec4ec8b073] ...
	I0829 11:56:39.671409    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88ec4ec8b073"
	I0829 11:56:39.710477    4103 logs.go:123] Gathering logs for etcd [fba8ccc4b085] ...
	I0829 11:56:39.710490    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fba8ccc4b085"
	I0829 11:56:39.725067    4103 logs.go:123] Gathering logs for coredns [91a0219bc66f] ...
	I0829 11:56:39.725078    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91a0219bc66f"
	I0829 11:56:39.736292    4103 logs.go:123] Gathering logs for kube-scheduler [52c287dcd5d2] ...
	I0829 11:56:39.736305    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52c287dcd5d2"
	I0829 11:56:39.747700    4103 logs.go:123] Gathering logs for kube-scheduler [88b8db5c0022] ...
	I0829 11:56:39.747711    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88b8db5c0022"
	I0829 11:56:39.759439    4103 logs.go:123] Gathering logs for kube-proxy [1d9e93bcb1b2] ...
	I0829 11:56:39.759450    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d9e93bcb1b2"
	I0829 11:56:39.770895    4103 logs.go:123] Gathering logs for Docker ...
	I0829 11:56:39.770907    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0829 11:56:39.793617    4103 logs.go:123] Gathering logs for container status ...
	I0829 11:56:39.793627    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 11:56:39.805876    4103 logs.go:123] Gathering logs for kubelet ...
	I0829 11:56:39.805890    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 11:56:39.844594    4103 logs.go:123] Gathering logs for kube-apiserver [939d9c84d1e1] ...
	I0829 11:56:39.844606    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 939d9c84d1e1"
	I0829 11:56:39.859330    4103 logs.go:123] Gathering logs for etcd [0afeef147c8b] ...
	I0829 11:56:39.859341    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0afeef147c8b"
	I0829 11:56:39.873479    4103 logs.go:123] Gathering logs for kube-controller-manager [6871f8a47711] ...
	I0829 11:56:39.873491    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6871f8a47711"
	I0829 11:56:39.891763    4103 logs.go:123] Gathering logs for storage-provisioner [a221e919fb63] ...
	I0829 11:56:39.891773    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a221e919fb63"
	I0829 11:56:42.405188    4103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0829 11:56:47.407854    4103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0829 11:56:47.408051    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0829 11:56:47.427426    4103 logs.go:276] 2 containers: [939d9c84d1e1 88ec4ec8b073]
	I0829 11:56:47.427518    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0829 11:56:47.441553    4103 logs.go:276] 2 containers: [0afeef147c8b fba8ccc4b085]
	I0829 11:56:47.441631    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0829 11:56:47.453383    4103 logs.go:276] 1 containers: [91a0219bc66f]
	I0829 11:56:47.453449    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0829 11:56:47.464304    4103 logs.go:276] 2 containers: [52c287dcd5d2 88b8db5c0022]
	I0829 11:56:47.464375    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0829 11:56:47.474652    4103 logs.go:276] 1 containers: [1d9e93bcb1b2]
	I0829 11:56:47.474719    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0829 11:56:47.485424    4103 logs.go:276] 2 containers: [6871f8a47711 0b5b5924d1b0]
	I0829 11:56:47.485490    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0829 11:56:47.495586    4103 logs.go:276] 0 containers: []
	W0829 11:56:47.495597    4103 logs.go:278] No container was found matching "kindnet"
	I0829 11:56:47.495653    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0829 11:56:47.505707    4103 logs.go:276] 1 containers: [a221e919fb63]
	I0829 11:56:47.505723    4103 logs.go:123] Gathering logs for kubelet ...
	I0829 11:56:47.505728    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 11:56:47.542794    4103 logs.go:123] Gathering logs for describe nodes ...
	I0829 11:56:47.542806    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0829 11:56:47.577545    4103 logs.go:123] Gathering logs for etcd [0afeef147c8b] ...
	I0829 11:56:47.577560    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0afeef147c8b"
	I0829 11:56:47.591877    4103 logs.go:123] Gathering logs for kube-scheduler [88b8db5c0022] ...
	I0829 11:56:47.591889    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88b8db5c0022"
	I0829 11:56:47.604585    4103 logs.go:123] Gathering logs for kube-controller-manager [6871f8a47711] ...
	I0829 11:56:47.604597    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6871f8a47711"
	I0829 11:56:47.622174    4103 logs.go:123] Gathering logs for storage-provisioner [a221e919fb63] ...
	I0829 11:56:47.622186    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a221e919fb63"
	I0829 11:56:47.633845    4103 logs.go:123] Gathering logs for kube-apiserver [88ec4ec8b073] ...
	I0829 11:56:47.633857    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88ec4ec8b073"
	I0829 11:56:47.674627    4103 logs.go:123] Gathering logs for etcd [fba8ccc4b085] ...
	I0829 11:56:47.674641    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fba8ccc4b085"
	I0829 11:56:47.689131    4103 logs.go:123] Gathering logs for Docker ...
	I0829 11:56:47.689142    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0829 11:56:47.712340    4103 logs.go:123] Gathering logs for coredns [91a0219bc66f] ...
	I0829 11:56:47.712352    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91a0219bc66f"
	I0829 11:56:47.723554    4103 logs.go:123] Gathering logs for kube-scheduler [52c287dcd5d2] ...
	I0829 11:56:47.723567    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52c287dcd5d2"
	I0829 11:56:47.734842    4103 logs.go:123] Gathering logs for kube-controller-manager [0b5b5924d1b0] ...
	I0829 11:56:47.734854    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b5b5924d1b0"
	I0829 11:56:47.748490    4103 logs.go:123] Gathering logs for container status ...
	I0829 11:56:47.748503    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 11:56:47.761145    4103 logs.go:123] Gathering logs for dmesg ...
	I0829 11:56:47.761157    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 11:56:47.765277    4103 logs.go:123] Gathering logs for kube-apiserver [939d9c84d1e1] ...
	I0829 11:56:47.765284    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 939d9c84d1e1"
	I0829 11:56:47.779199    4103 logs.go:123] Gathering logs for kube-proxy [1d9e93bcb1b2] ...
	I0829 11:56:47.779208    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d9e93bcb1b2"
	I0829 11:56:50.292960    4103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0829 11:56:55.295142    4103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0829 11:56:55.295246    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0829 11:56:55.306261    4103 logs.go:276] 2 containers: [939d9c84d1e1 88ec4ec8b073]
	I0829 11:56:55.306342    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0829 11:56:55.316798    4103 logs.go:276] 2 containers: [0afeef147c8b fba8ccc4b085]
	I0829 11:56:55.316867    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0829 11:56:55.327856    4103 logs.go:276] 1 containers: [91a0219bc66f]
	I0829 11:56:55.327925    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0829 11:56:55.338394    4103 logs.go:276] 2 containers: [52c287dcd5d2 88b8db5c0022]
	I0829 11:56:55.338465    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0829 11:56:55.349138    4103 logs.go:276] 1 containers: [1d9e93bcb1b2]
	I0829 11:56:55.349213    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0829 11:56:55.360414    4103 logs.go:276] 2 containers: [6871f8a47711 0b5b5924d1b0]
	I0829 11:56:55.360476    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0829 11:56:55.373972    4103 logs.go:276] 0 containers: []
	W0829 11:56:55.373983    4103 logs.go:278] No container was found matching "kindnet"
	I0829 11:56:55.374038    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0829 11:56:55.384225    4103 logs.go:276] 1 containers: [a221e919fb63]
	I0829 11:56:55.384246    4103 logs.go:123] Gathering logs for etcd [fba8ccc4b085] ...
	I0829 11:56:55.384253    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fba8ccc4b085"
	I0829 11:56:55.399003    4103 logs.go:123] Gathering logs for kube-proxy [1d9e93bcb1b2] ...
	I0829 11:56:55.399015    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d9e93bcb1b2"
	I0829 11:56:55.410679    4103 logs.go:123] Gathering logs for describe nodes ...
	I0829 11:56:55.410692    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0829 11:56:55.445172    4103 logs.go:123] Gathering logs for kube-apiserver [939d9c84d1e1] ...
	I0829 11:56:55.445185    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 939d9c84d1e1"
	I0829 11:56:55.459232    4103 logs.go:123] Gathering logs for kube-apiserver [88ec4ec8b073] ...
	I0829 11:56:55.459244    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88ec4ec8b073"
	I0829 11:56:55.500289    4103 logs.go:123] Gathering logs for storage-provisioner [a221e919fb63] ...
	I0829 11:56:55.500302    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a221e919fb63"
	I0829 11:56:55.512432    4103 logs.go:123] Gathering logs for Docker ...
	I0829 11:56:55.512447    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0829 11:56:55.537181    4103 logs.go:123] Gathering logs for container status ...
	I0829 11:56:55.537188    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 11:56:55.549378    4103 logs.go:123] Gathering logs for kubelet ...
	I0829 11:56:55.549390    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 11:56:55.586949    4103 logs.go:123] Gathering logs for kube-controller-manager [6871f8a47711] ...
	I0829 11:56:55.586965    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6871f8a47711"
	I0829 11:56:55.604494    4103 logs.go:123] Gathering logs for kube-controller-manager [0b5b5924d1b0] ...
	I0829 11:56:55.604504    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b5b5924d1b0"
	I0829 11:56:55.619216    4103 logs.go:123] Gathering logs for dmesg ...
	I0829 11:56:55.619225    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 11:56:55.623419    4103 logs.go:123] Gathering logs for kube-scheduler [88b8db5c0022] ...
	I0829 11:56:55.623428    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88b8db5c0022"
	I0829 11:56:55.635450    4103 logs.go:123] Gathering logs for etcd [0afeef147c8b] ...
	I0829 11:56:55.635460    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0afeef147c8b"
	I0829 11:56:55.649653    4103 logs.go:123] Gathering logs for coredns [91a0219bc66f] ...
	I0829 11:56:55.649665    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91a0219bc66f"
	I0829 11:56:55.662800    4103 logs.go:123] Gathering logs for kube-scheduler [52c287dcd5d2] ...
	I0829 11:56:55.662812    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52c287dcd5d2"
	I0829 11:56:58.176627    4103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0829 11:57:03.178943    4103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0829 11:57:03.179160    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0829 11:57:03.206909    4103 logs.go:276] 2 containers: [939d9c84d1e1 88ec4ec8b073]
	I0829 11:57:03.207000    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0829 11:57:03.228107    4103 logs.go:276] 2 containers: [0afeef147c8b fba8ccc4b085]
	I0829 11:57:03.228183    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0829 11:57:03.242773    4103 logs.go:276] 1 containers: [91a0219bc66f]
	I0829 11:57:03.242847    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0829 11:57:03.253594    4103 logs.go:276] 2 containers: [52c287dcd5d2 88b8db5c0022]
	I0829 11:57:03.253665    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0829 11:57:03.264410    4103 logs.go:276] 1 containers: [1d9e93bcb1b2]
	I0829 11:57:03.264475    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0829 11:57:03.275205    4103 logs.go:276] 2 containers: [6871f8a47711 0b5b5924d1b0]
	I0829 11:57:03.275276    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0829 11:57:03.285468    4103 logs.go:276] 0 containers: []
	W0829 11:57:03.285482    4103 logs.go:278] No container was found matching "kindnet"
	I0829 11:57:03.285539    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0829 11:57:03.296290    4103 logs.go:276] 1 containers: [a221e919fb63]
	I0829 11:57:03.296307    4103 logs.go:123] Gathering logs for describe nodes ...
	I0829 11:57:03.296313    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0829 11:57:03.337716    4103 logs.go:123] Gathering logs for kube-apiserver [939d9c84d1e1] ...
	I0829 11:57:03.337728    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 939d9c84d1e1"
	I0829 11:57:03.352086    4103 logs.go:123] Gathering logs for etcd [0afeef147c8b] ...
	I0829 11:57:03.352096    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0afeef147c8b"
	I0829 11:57:03.369320    4103 logs.go:123] Gathering logs for coredns [91a0219bc66f] ...
	I0829 11:57:03.369333    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91a0219bc66f"
	I0829 11:57:03.380666    4103 logs.go:123] Gathering logs for kube-scheduler [88b8db5c0022] ...
	I0829 11:57:03.380679    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88b8db5c0022"
	I0829 11:57:03.396763    4103 logs.go:123] Gathering logs for kubelet ...
	I0829 11:57:03.396774    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 11:57:03.436753    4103 logs.go:123] Gathering logs for dmesg ...
	I0829 11:57:03.436764    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 11:57:03.441303    4103 logs.go:123] Gathering logs for kube-apiserver [88ec4ec8b073] ...
	I0829 11:57:03.441310    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88ec4ec8b073"
	I0829 11:57:03.478625    4103 logs.go:123] Gathering logs for etcd [fba8ccc4b085] ...
	I0829 11:57:03.478636    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fba8ccc4b085"
	I0829 11:57:03.494868    4103 logs.go:123] Gathering logs for kube-controller-manager [0b5b5924d1b0] ...
	I0829 11:57:03.494880    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b5b5924d1b0"
	I0829 11:57:03.509787    4103 logs.go:123] Gathering logs for kube-scheduler [52c287dcd5d2] ...
	I0829 11:57:03.509799    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52c287dcd5d2"
	I0829 11:57:03.521812    4103 logs.go:123] Gathering logs for kube-proxy [1d9e93bcb1b2] ...
	I0829 11:57:03.521823    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d9e93bcb1b2"
	I0829 11:57:03.533565    4103 logs.go:123] Gathering logs for kube-controller-manager [6871f8a47711] ...
	I0829 11:57:03.533576    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6871f8a47711"
	I0829 11:57:03.551083    4103 logs.go:123] Gathering logs for storage-provisioner [a221e919fb63] ...
	I0829 11:57:03.551094    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a221e919fb63"
	I0829 11:57:03.562836    4103 logs.go:123] Gathering logs for Docker ...
	I0829 11:57:03.562846    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0829 11:57:03.586881    4103 logs.go:123] Gathering logs for container status ...
	I0829 11:57:03.586891    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 11:57:06.101427    4103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0829 11:57:11.103792    4103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0829 11:57:11.104218    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0829 11:57:11.143400    4103 logs.go:276] 2 containers: [939d9c84d1e1 88ec4ec8b073]
	I0829 11:57:11.143535    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0829 11:57:11.166037    4103 logs.go:276] 2 containers: [0afeef147c8b fba8ccc4b085]
	I0829 11:57:11.166136    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0829 11:57:11.180808    4103 logs.go:276] 1 containers: [91a0219bc66f]
	I0829 11:57:11.180880    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0829 11:57:11.192859    4103 logs.go:276] 2 containers: [52c287dcd5d2 88b8db5c0022]
	I0829 11:57:11.192932    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0829 11:57:11.203955    4103 logs.go:276] 1 containers: [1d9e93bcb1b2]
	I0829 11:57:11.204024    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0829 11:57:11.226265    4103 logs.go:276] 2 containers: [6871f8a47711 0b5b5924d1b0]
	I0829 11:57:11.226332    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0829 11:57:11.243620    4103 logs.go:276] 0 containers: []
	W0829 11:57:11.243657    4103 logs.go:278] No container was found matching "kindnet"
	I0829 11:57:11.243718    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0829 11:57:11.259611    4103 logs.go:276] 1 containers: [a221e919fb63]
	I0829 11:57:11.259629    4103 logs.go:123] Gathering logs for kube-scheduler [88b8db5c0022] ...
	I0829 11:57:11.259636    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88b8db5c0022"
	I0829 11:57:11.277017    4103 logs.go:123] Gathering logs for kube-proxy [1d9e93bcb1b2] ...
	I0829 11:57:11.277029    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d9e93bcb1b2"
	I0829 11:57:11.294858    4103 logs.go:123] Gathering logs for kube-controller-manager [6871f8a47711] ...
	I0829 11:57:11.294870    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6871f8a47711"
	I0829 11:57:11.313189    4103 logs.go:123] Gathering logs for storage-provisioner [a221e919fb63] ...
	I0829 11:57:11.313201    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a221e919fb63"
	I0829 11:57:11.324578    4103 logs.go:123] Gathering logs for Docker ...
	I0829 11:57:11.324590    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0829 11:57:11.348467    4103 logs.go:123] Gathering logs for kubelet ...
	I0829 11:57:11.348474    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 11:57:11.384954    4103 logs.go:123] Gathering logs for etcd [0afeef147c8b] ...
	I0829 11:57:11.384963    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0afeef147c8b"
	I0829 11:57:11.398798    4103 logs.go:123] Gathering logs for etcd [fba8ccc4b085] ...
	I0829 11:57:11.398811    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fba8ccc4b085"
	I0829 11:57:11.412498    4103 logs.go:123] Gathering logs for coredns [91a0219bc66f] ...
	I0829 11:57:11.412510    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91a0219bc66f"
	I0829 11:57:11.424084    4103 logs.go:123] Gathering logs for kube-scheduler [52c287dcd5d2] ...
	I0829 11:57:11.424097    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52c287dcd5d2"
	I0829 11:57:11.436265    4103 logs.go:123] Gathering logs for kube-apiserver [939d9c84d1e1] ...
	I0829 11:57:11.436276    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 939d9c84d1e1"
	I0829 11:57:11.450647    4103 logs.go:123] Gathering logs for kube-controller-manager [0b5b5924d1b0] ...
	I0829 11:57:11.450662    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b5b5924d1b0"
	I0829 11:57:11.464346    4103 logs.go:123] Gathering logs for describe nodes ...
	I0829 11:57:11.464356    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0829 11:57:11.498321    4103 logs.go:123] Gathering logs for kube-apiserver [88ec4ec8b073] ...
	I0829 11:57:11.498333    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88ec4ec8b073"
	I0829 11:57:11.536322    4103 logs.go:123] Gathering logs for container status ...
	I0829 11:57:11.536333    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 11:57:11.548681    4103 logs.go:123] Gathering logs for dmesg ...
	I0829 11:57:11.548692    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 11:57:14.055334    4103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0829 11:57:19.057320    4103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0829 11:57:19.057590    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0829 11:57:19.085831    4103 logs.go:276] 2 containers: [939d9c84d1e1 88ec4ec8b073]
	I0829 11:57:19.085949    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0829 11:57:19.103288    4103 logs.go:276] 2 containers: [0afeef147c8b fba8ccc4b085]
	I0829 11:57:19.103371    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0829 11:57:19.116719    4103 logs.go:276] 1 containers: [91a0219bc66f]
	I0829 11:57:19.116790    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0829 11:57:19.128382    4103 logs.go:276] 2 containers: [52c287dcd5d2 88b8db5c0022]
	I0829 11:57:19.128456    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0829 11:57:19.139078    4103 logs.go:276] 1 containers: [1d9e93bcb1b2]
	I0829 11:57:19.139146    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0829 11:57:19.149265    4103 logs.go:276] 2 containers: [6871f8a47711 0b5b5924d1b0]
	I0829 11:57:19.149330    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0829 11:57:19.164492    4103 logs.go:276] 0 containers: []
	W0829 11:57:19.164503    4103 logs.go:278] No container was found matching "kindnet"
	I0829 11:57:19.164560    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0829 11:57:19.182267    4103 logs.go:276] 1 containers: [a221e919fb63]
	I0829 11:57:19.182283    4103 logs.go:123] Gathering logs for kube-apiserver [88ec4ec8b073] ...
	I0829 11:57:19.182290    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88ec4ec8b073"
	I0829 11:57:19.221357    4103 logs.go:123] Gathering logs for etcd [0afeef147c8b] ...
	I0829 11:57:19.221369    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0afeef147c8b"
	I0829 11:57:19.235319    4103 logs.go:123] Gathering logs for etcd [fba8ccc4b085] ...
	I0829 11:57:19.235328    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fba8ccc4b085"
	I0829 11:57:19.249670    4103 logs.go:123] Gathering logs for kubelet ...
	I0829 11:57:19.249683    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 11:57:19.289082    4103 logs.go:123] Gathering logs for kube-proxy [1d9e93bcb1b2] ...
	I0829 11:57:19.289095    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d9e93bcb1b2"
	I0829 11:57:19.304038    4103 logs.go:123] Gathering logs for kube-controller-manager [0b5b5924d1b0] ...
	I0829 11:57:19.304048    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b5b5924d1b0"
	I0829 11:57:19.323544    4103 logs.go:123] Gathering logs for describe nodes ...
	I0829 11:57:19.323554    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0829 11:57:19.358712    4103 logs.go:123] Gathering logs for kube-apiserver [939d9c84d1e1] ...
	I0829 11:57:19.358723    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 939d9c84d1e1"
	I0829 11:57:19.372254    4103 logs.go:123] Gathering logs for kube-scheduler [88b8db5c0022] ...
	I0829 11:57:19.372266    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88b8db5c0022"
	I0829 11:57:19.384619    4103 logs.go:123] Gathering logs for kube-controller-manager [6871f8a47711] ...
	I0829 11:57:19.384631    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6871f8a47711"
	I0829 11:57:19.401344    4103 logs.go:123] Gathering logs for storage-provisioner [a221e919fb63] ...
	I0829 11:57:19.401355    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a221e919fb63"
	I0829 11:57:19.412323    4103 logs.go:123] Gathering logs for Docker ...
	I0829 11:57:19.412335    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0829 11:57:19.436489    4103 logs.go:123] Gathering logs for dmesg ...
	I0829 11:57:19.436497    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 11:57:19.440959    4103 logs.go:123] Gathering logs for coredns [91a0219bc66f] ...
	I0829 11:57:19.440967    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91a0219bc66f"
	I0829 11:57:19.452320    4103 logs.go:123] Gathering logs for kube-scheduler [52c287dcd5d2] ...
	I0829 11:57:19.452336    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52c287dcd5d2"
	I0829 11:57:19.463893    4103 logs.go:123] Gathering logs for container status ...
	I0829 11:57:19.463904    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 11:57:21.980177    4103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0829 11:57:26.982525    4103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0829 11:57:26.982770    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0829 11:57:27.003103    4103 logs.go:276] 2 containers: [939d9c84d1e1 88ec4ec8b073]
	I0829 11:57:27.003198    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0829 11:57:27.020074    4103 logs.go:276] 2 containers: [0afeef147c8b fba8ccc4b085]
	I0829 11:57:27.020148    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0829 11:57:27.035063    4103 logs.go:276] 1 containers: [91a0219bc66f]
	I0829 11:57:27.035140    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0829 11:57:27.045414    4103 logs.go:276] 2 containers: [52c287dcd5d2 88b8db5c0022]
	I0829 11:57:27.045488    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0829 11:57:27.056101    4103 logs.go:276] 1 containers: [1d9e93bcb1b2]
	I0829 11:57:27.056179    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0829 11:57:27.066670    4103 logs.go:276] 2 containers: [6871f8a47711 0b5b5924d1b0]
	I0829 11:57:27.066736    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0829 11:57:27.076603    4103 logs.go:276] 0 containers: []
	W0829 11:57:27.076615    4103 logs.go:278] No container was found matching "kindnet"
	I0829 11:57:27.076670    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0829 11:57:27.087061    4103 logs.go:276] 1 containers: [a221e919fb63]
	I0829 11:57:27.087079    4103 logs.go:123] Gathering logs for storage-provisioner [a221e919fb63] ...
	I0829 11:57:27.087084    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a221e919fb63"
	I0829 11:57:27.098654    4103 logs.go:123] Gathering logs for etcd [fba8ccc4b085] ...
	I0829 11:57:27.098668    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fba8ccc4b085"
	I0829 11:57:27.113399    4103 logs.go:123] Gathering logs for kube-scheduler [52c287dcd5d2] ...
	I0829 11:57:27.113413    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52c287dcd5d2"
	I0829 11:57:27.129619    4103 logs.go:123] Gathering logs for kube-scheduler [88b8db5c0022] ...
	I0829 11:57:27.129633    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88b8db5c0022"
	I0829 11:57:27.142064    4103 logs.go:123] Gathering logs for kube-proxy [1d9e93bcb1b2] ...
	I0829 11:57:27.142076    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d9e93bcb1b2"
	I0829 11:57:27.157659    4103 logs.go:123] Gathering logs for Docker ...
	I0829 11:57:27.157669    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0829 11:57:27.180056    4103 logs.go:123] Gathering logs for container status ...
	I0829 11:57:27.180068    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 11:57:27.192574    4103 logs.go:123] Gathering logs for dmesg ...
	I0829 11:57:27.192587    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 11:57:27.196878    4103 logs.go:123] Gathering logs for describe nodes ...
	I0829 11:57:27.196887    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0829 11:57:27.233554    4103 logs.go:123] Gathering logs for kube-apiserver [88ec4ec8b073] ...
	I0829 11:57:27.233568    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88ec4ec8b073"
	I0829 11:57:27.272455    4103 logs.go:123] Gathering logs for etcd [0afeef147c8b] ...
	I0829 11:57:27.272465    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0afeef147c8b"
	I0829 11:57:27.286513    4103 logs.go:123] Gathering logs for coredns [91a0219bc66f] ...
	I0829 11:57:27.286526    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91a0219bc66f"
	I0829 11:57:27.298225    4103 logs.go:123] Gathering logs for kube-controller-manager [6871f8a47711] ...
	I0829 11:57:27.298237    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6871f8a47711"
	I0829 11:57:27.315134    4103 logs.go:123] Gathering logs for kube-controller-manager [0b5b5924d1b0] ...
	I0829 11:57:27.315145    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b5b5924d1b0"
	I0829 11:57:27.328903    4103 logs.go:123] Gathering logs for kubelet ...
	I0829 11:57:27.328914    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 11:57:27.367136    4103 logs.go:123] Gathering logs for kube-apiserver [939d9c84d1e1] ...
	I0829 11:57:27.367146    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 939d9c84d1e1"
	I0829 11:57:29.887243    4103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0829 11:57:34.889703    4103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0829 11:57:34.890076    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0829 11:57:34.925949    4103 logs.go:276] 2 containers: [939d9c84d1e1 88ec4ec8b073]
	I0829 11:57:34.926068    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0829 11:57:34.942329    4103 logs.go:276] 2 containers: [0afeef147c8b fba8ccc4b085]
	I0829 11:57:34.942414    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0829 11:57:34.955584    4103 logs.go:276] 1 containers: [91a0219bc66f]
	I0829 11:57:34.955655    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0829 11:57:34.966600    4103 logs.go:276] 2 containers: [52c287dcd5d2 88b8db5c0022]
	I0829 11:57:34.966672    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0829 11:57:34.977294    4103 logs.go:276] 1 containers: [1d9e93bcb1b2]
	I0829 11:57:34.977358    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0829 11:57:34.987834    4103 logs.go:276] 2 containers: [6871f8a47711 0b5b5924d1b0]
	I0829 11:57:34.987902    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0829 11:57:34.998584    4103 logs.go:276] 0 containers: []
	W0829 11:57:34.998596    4103 logs.go:278] No container was found matching "kindnet"
	I0829 11:57:34.998657    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0829 11:57:35.014624    4103 logs.go:276] 1 containers: [a221e919fb63]
	I0829 11:57:35.014641    4103 logs.go:123] Gathering logs for kubelet ...
	I0829 11:57:35.014647    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 11:57:35.051471    4103 logs.go:123] Gathering logs for kube-apiserver [88ec4ec8b073] ...
	I0829 11:57:35.051479    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88ec4ec8b073"
	I0829 11:57:35.090220    4103 logs.go:123] Gathering logs for coredns [91a0219bc66f] ...
	I0829 11:57:35.090230    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91a0219bc66f"
	I0829 11:57:35.101767    4103 logs.go:123] Gathering logs for kube-controller-manager [0b5b5924d1b0] ...
	I0829 11:57:35.101780    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b5b5924d1b0"
	I0829 11:57:35.115893    4103 logs.go:123] Gathering logs for kube-proxy [1d9e93bcb1b2] ...
	I0829 11:57:35.115903    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d9e93bcb1b2"
	I0829 11:57:35.127399    4103 logs.go:123] Gathering logs for Docker ...
	I0829 11:57:35.127411    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0829 11:57:35.150908    4103 logs.go:123] Gathering logs for storage-provisioner [a221e919fb63] ...
	I0829 11:57:35.150920    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a221e919fb63"
	I0829 11:57:35.162597    4103 logs.go:123] Gathering logs for container status ...
	I0829 11:57:35.162607    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 11:57:35.176067    4103 logs.go:123] Gathering logs for describe nodes ...
	I0829 11:57:35.176079    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0829 11:57:35.210296    4103 logs.go:123] Gathering logs for kube-apiserver [939d9c84d1e1] ...
	I0829 11:57:35.210310    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 939d9c84d1e1"
	I0829 11:57:35.228455    4103 logs.go:123] Gathering logs for etcd [0afeef147c8b] ...
	I0829 11:57:35.228467    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0afeef147c8b"
	I0829 11:57:35.242843    4103 logs.go:123] Gathering logs for kube-controller-manager [6871f8a47711] ...
	I0829 11:57:35.242853    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6871f8a47711"
	I0829 11:57:35.261537    4103 logs.go:123] Gathering logs for dmesg ...
	I0829 11:57:35.261548    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 11:57:35.265646    4103 logs.go:123] Gathering logs for etcd [fba8ccc4b085] ...
	I0829 11:57:35.265654    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fba8ccc4b085"
	I0829 11:57:35.280081    4103 logs.go:123] Gathering logs for kube-scheduler [52c287dcd5d2] ...
	I0829 11:57:35.280090    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52c287dcd5d2"
	I0829 11:57:35.291839    4103 logs.go:123] Gathering logs for kube-scheduler [88b8db5c0022] ...
	I0829 11:57:35.291850    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88b8db5c0022"
	I0829 11:57:37.805894    4103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0829 11:57:42.808151    4103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0829 11:57:42.808648    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0829 11:57:42.848527    4103 logs.go:276] 2 containers: [939d9c84d1e1 88ec4ec8b073]
	I0829 11:57:42.848656    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0829 11:57:42.880085    4103 logs.go:276] 2 containers: [0afeef147c8b fba8ccc4b085]
	I0829 11:57:42.880182    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0829 11:57:42.899159    4103 logs.go:276] 1 containers: [91a0219bc66f]
	I0829 11:57:42.899225    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0829 11:57:42.910263    4103 logs.go:276] 2 containers: [52c287dcd5d2 88b8db5c0022]
	I0829 11:57:42.910336    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0829 11:57:42.920442    4103 logs.go:276] 1 containers: [1d9e93bcb1b2]
	I0829 11:57:42.920509    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0829 11:57:42.930878    4103 logs.go:276] 2 containers: [6871f8a47711 0b5b5924d1b0]
	I0829 11:57:42.930957    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0829 11:57:42.941030    4103 logs.go:276] 0 containers: []
	W0829 11:57:42.941041    4103 logs.go:278] No container was found matching "kindnet"
	I0829 11:57:42.941100    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0829 11:57:42.951127    4103 logs.go:276] 1 containers: [a221e919fb63]
	I0829 11:57:42.951146    4103 logs.go:123] Gathering logs for dmesg ...
	I0829 11:57:42.951161    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 11:57:42.955594    4103 logs.go:123] Gathering logs for Docker ...
	I0829 11:57:42.955600    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0829 11:57:42.979411    4103 logs.go:123] Gathering logs for kubelet ...
	I0829 11:57:42.979422    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 11:57:43.016296    4103 logs.go:123] Gathering logs for kube-apiserver [939d9c84d1e1] ...
	I0829 11:57:43.016303    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 939d9c84d1e1"
	I0829 11:57:43.030171    4103 logs.go:123] Gathering logs for kube-controller-manager [6871f8a47711] ...
	I0829 11:57:43.030182    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6871f8a47711"
	I0829 11:57:43.047385    4103 logs.go:123] Gathering logs for storage-provisioner [a221e919fb63] ...
	I0829 11:57:43.047395    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a221e919fb63"
	I0829 11:57:43.059206    4103 logs.go:123] Gathering logs for kube-scheduler [52c287dcd5d2] ...
	I0829 11:57:43.059215    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52c287dcd5d2"
	I0829 11:57:43.071176    4103 logs.go:123] Gathering logs for container status ...
	I0829 11:57:43.071189    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 11:57:43.082988    4103 logs.go:123] Gathering logs for kube-proxy [1d9e93bcb1b2] ...
	I0829 11:57:43.083002    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d9e93bcb1b2"
	I0829 11:57:43.094860    4103 logs.go:123] Gathering logs for kube-controller-manager [0b5b5924d1b0] ...
	I0829 11:57:43.094872    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b5b5924d1b0"
	I0829 11:57:43.112507    4103 logs.go:123] Gathering logs for describe nodes ...
	I0829 11:57:43.112520    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0829 11:57:43.149119    4103 logs.go:123] Gathering logs for kube-apiserver [88ec4ec8b073] ...
	I0829 11:57:43.149130    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88ec4ec8b073"
	I0829 11:57:43.191039    4103 logs.go:123] Gathering logs for etcd [0afeef147c8b] ...
	I0829 11:57:43.191050    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0afeef147c8b"
	I0829 11:57:43.205847    4103 logs.go:123] Gathering logs for etcd [fba8ccc4b085] ...
	I0829 11:57:43.205859    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fba8ccc4b085"
	I0829 11:57:43.220817    4103 logs.go:123] Gathering logs for coredns [91a0219bc66f] ...
	I0829 11:57:43.220827    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91a0219bc66f"
	I0829 11:57:43.232127    4103 logs.go:123] Gathering logs for kube-scheduler [88b8db5c0022] ...
	I0829 11:57:43.232139    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88b8db5c0022"
	I0829 11:57:45.746659    4103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0829 11:57:50.747331    4103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": dial tcp 10.0.2.15:8443: i/o timeout (Client.Timeout exceeded while awaiting headers)
	I0829 11:57:50.747461    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0829 11:57:50.760754    4103 logs.go:276] 2 containers: [939d9c84d1e1 88ec4ec8b073]
	I0829 11:57:50.760826    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0829 11:57:50.772629    4103 logs.go:276] 2 containers: [0afeef147c8b fba8ccc4b085]
	I0829 11:57:50.772706    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0829 11:57:50.784271    4103 logs.go:276] 1 containers: [91a0219bc66f]
	I0829 11:57:50.784335    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0829 11:57:50.809141    4103 logs.go:276] 2 containers: [52c287dcd5d2 88b8db5c0022]
	I0829 11:57:50.809218    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0829 11:57:50.819892    4103 logs.go:276] 1 containers: [1d9e93bcb1b2]
	I0829 11:57:50.819959    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0829 11:57:50.830583    4103 logs.go:276] 2 containers: [6871f8a47711 0b5b5924d1b0]
	I0829 11:57:50.830647    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0829 11:57:50.841487    4103 logs.go:276] 0 containers: []
	W0829 11:57:50.841503    4103 logs.go:278] No container was found matching "kindnet"
	I0829 11:57:50.841556    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0829 11:57:50.852253    4103 logs.go:276] 1 containers: [a221e919fb63]
	I0829 11:57:50.852271    4103 logs.go:123] Gathering logs for dmesg ...
	I0829 11:57:50.852278    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 11:57:50.857906    4103 logs.go:123] Gathering logs for kube-apiserver [88ec4ec8b073] ...
	I0829 11:57:50.857918    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88ec4ec8b073"
	I0829 11:57:50.895524    4103 logs.go:123] Gathering logs for etcd [fba8ccc4b085] ...
	I0829 11:57:50.895537    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fba8ccc4b085"
	I0829 11:57:50.909976    4103 logs.go:123] Gathering logs for kube-controller-manager [6871f8a47711] ...
	I0829 11:57:50.909987    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6871f8a47711"
	I0829 11:57:50.927046    4103 logs.go:123] Gathering logs for storage-provisioner [a221e919fb63] ...
	I0829 11:57:50.927056    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a221e919fb63"
	I0829 11:57:50.938459    4103 logs.go:123] Gathering logs for kubelet ...
	I0829 11:57:50.938469    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 11:57:50.976875    4103 logs.go:123] Gathering logs for kube-proxy [1d9e93bcb1b2] ...
	I0829 11:57:50.976888    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d9e93bcb1b2"
	I0829 11:57:50.994467    4103 logs.go:123] Gathering logs for container status ...
	I0829 11:57:50.994478    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 11:57:51.007110    4103 logs.go:123] Gathering logs for etcd [0afeef147c8b] ...
	I0829 11:57:51.007122    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0afeef147c8b"
	I0829 11:57:51.021032    4103 logs.go:123] Gathering logs for kube-apiserver [939d9c84d1e1] ...
	I0829 11:57:51.021046    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 939d9c84d1e1"
	I0829 11:57:51.034935    4103 logs.go:123] Gathering logs for coredns [91a0219bc66f] ...
	I0829 11:57:51.034949    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91a0219bc66f"
	I0829 11:57:51.046126    4103 logs.go:123] Gathering logs for Docker ...
	I0829 11:57:51.046136    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0829 11:57:51.070804    4103 logs.go:123] Gathering logs for describe nodes ...
	I0829 11:57:51.070825    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0829 11:57:51.105530    4103 logs.go:123] Gathering logs for kube-scheduler [88b8db5c0022] ...
	I0829 11:57:51.105543    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88b8db5c0022"
	I0829 11:57:51.118153    4103 logs.go:123] Gathering logs for kube-controller-manager [0b5b5924d1b0] ...
	I0829 11:57:51.118164    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b5b5924d1b0"
	I0829 11:57:51.133115    4103 logs.go:123] Gathering logs for kube-scheduler [52c287dcd5d2] ...
	I0829 11:57:51.133129    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52c287dcd5d2"
	I0829 11:57:53.646743    4103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0829 11:57:58.648933    4103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0829 11:57:58.649048    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0829 11:57:58.660177    4103 logs.go:276] 2 containers: [939d9c84d1e1 88ec4ec8b073]
	I0829 11:57:58.660252    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0829 11:57:58.671221    4103 logs.go:276] 2 containers: [0afeef147c8b fba8ccc4b085]
	I0829 11:57:58.671296    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0829 11:57:58.681827    4103 logs.go:276] 1 containers: [91a0219bc66f]
	I0829 11:57:58.681895    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0829 11:57:58.692211    4103 logs.go:276] 2 containers: [52c287dcd5d2 88b8db5c0022]
	I0829 11:57:58.692286    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0829 11:57:58.702451    4103 logs.go:276] 1 containers: [1d9e93bcb1b2]
	I0829 11:57:58.702519    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0829 11:57:58.713201    4103 logs.go:276] 2 containers: [6871f8a47711 0b5b5924d1b0]
	I0829 11:57:58.713272    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0829 11:57:58.723714    4103 logs.go:276] 0 containers: []
	W0829 11:57:58.723725    4103 logs.go:278] No container was found matching "kindnet"
	I0829 11:57:58.723783    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0829 11:57:58.734645    4103 logs.go:276] 1 containers: [a221e919fb63]
	I0829 11:57:58.734663    4103 logs.go:123] Gathering logs for kubelet ...
	I0829 11:57:58.734668    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 11:57:58.772249    4103 logs.go:123] Gathering logs for kube-scheduler [52c287dcd5d2] ...
	I0829 11:57:58.772259    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52c287dcd5d2"
	I0829 11:57:58.783768    4103 logs.go:123] Gathering logs for kube-controller-manager [0b5b5924d1b0] ...
	I0829 11:57:58.783777    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b5b5924d1b0"
	I0829 11:57:58.797851    4103 logs.go:123] Gathering logs for container status ...
	I0829 11:57:58.797861    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 11:57:58.810209    4103 logs.go:123] Gathering logs for coredns [91a0219bc66f] ...
	I0829 11:57:58.810220    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91a0219bc66f"
	I0829 11:57:58.821550    4103 logs.go:123] Gathering logs for kube-controller-manager [6871f8a47711] ...
	I0829 11:57:58.821561    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6871f8a47711"
	I0829 11:57:58.838986    4103 logs.go:123] Gathering logs for kube-apiserver [939d9c84d1e1] ...
	I0829 11:57:58.838996    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 939d9c84d1e1"
	I0829 11:57:58.853160    4103 logs.go:123] Gathering logs for etcd [0afeef147c8b] ...
	I0829 11:57:58.853172    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0afeef147c8b"
	I0829 11:57:58.867952    4103 logs.go:123] Gathering logs for storage-provisioner [a221e919fb63] ...
	I0829 11:57:58.867964    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a221e919fb63"
	I0829 11:57:58.882635    4103 logs.go:123] Gathering logs for Docker ...
	I0829 11:57:58.882646    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0829 11:57:58.906569    4103 logs.go:123] Gathering logs for dmesg ...
	I0829 11:57:58.906579    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 11:57:58.910981    4103 logs.go:123] Gathering logs for describe nodes ...
	I0829 11:57:58.910990    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0829 11:57:58.947692    4103 logs.go:123] Gathering logs for kube-apiserver [88ec4ec8b073] ...
	I0829 11:57:58.947704    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88ec4ec8b073"
	I0829 11:57:58.986323    4103 logs.go:123] Gathering logs for etcd [fba8ccc4b085] ...
	I0829 11:57:58.986334    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fba8ccc4b085"
	I0829 11:57:59.001143    4103 logs.go:123] Gathering logs for kube-scheduler [88b8db5c0022] ...
	I0829 11:57:59.001156    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88b8db5c0022"
	I0829 11:57:59.013302    4103 logs.go:123] Gathering logs for kube-proxy [1d9e93bcb1b2] ...
	I0829 11:57:59.013315    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d9e93bcb1b2"
	I0829 11:58:01.527942    4103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0829 11:58:06.530294    4103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0829 11:58:06.530669    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0829 11:58:06.562151    4103 logs.go:276] 2 containers: [939d9c84d1e1 88ec4ec8b073]
	I0829 11:58:06.562282    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0829 11:58:06.580676    4103 logs.go:276] 2 containers: [0afeef147c8b fba8ccc4b085]
	I0829 11:58:06.580772    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0829 11:58:06.595543    4103 logs.go:276] 1 containers: [91a0219bc66f]
	I0829 11:58:06.595621    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0829 11:58:06.607558    4103 logs.go:276] 2 containers: [52c287dcd5d2 88b8db5c0022]
	I0829 11:58:06.607632    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0829 11:58:06.618382    4103 logs.go:276] 1 containers: [1d9e93bcb1b2]
	I0829 11:58:06.618446    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0829 11:58:06.630333    4103 logs.go:276] 2 containers: [6871f8a47711 0b5b5924d1b0]
	I0829 11:58:06.630395    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0829 11:58:06.641109    4103 logs.go:276] 0 containers: []
	W0829 11:58:06.641122    4103 logs.go:278] No container was found matching "kindnet"
	I0829 11:58:06.641184    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0829 11:58:06.653510    4103 logs.go:276] 1 containers: [a221e919fb63]
	I0829 11:58:06.653529    4103 logs.go:123] Gathering logs for kube-apiserver [88ec4ec8b073] ...
	I0829 11:58:06.653537    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88ec4ec8b073"
	I0829 11:58:06.695043    4103 logs.go:123] Gathering logs for etcd [fba8ccc4b085] ...
	I0829 11:58:06.695058    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fba8ccc4b085"
	I0829 11:58:06.710532    4103 logs.go:123] Gathering logs for kube-scheduler [52c287dcd5d2] ...
	I0829 11:58:06.710542    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52c287dcd5d2"
	I0829 11:58:06.722554    4103 logs.go:123] Gathering logs for storage-provisioner [a221e919fb63] ...
	I0829 11:58:06.722564    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a221e919fb63"
	I0829 11:58:06.733902    4103 logs.go:123] Gathering logs for container status ...
	I0829 11:58:06.733914    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 11:58:06.746129    4103 logs.go:123] Gathering logs for kubelet ...
	I0829 11:58:06.746138    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 11:58:06.784571    4103 logs.go:123] Gathering logs for kube-apiserver [939d9c84d1e1] ...
	I0829 11:58:06.784581    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 939d9c84d1e1"
	I0829 11:58:06.799277    4103 logs.go:123] Gathering logs for kube-controller-manager [0b5b5924d1b0] ...
	I0829 11:58:06.799288    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b5b5924d1b0"
	I0829 11:58:06.814230    4103 logs.go:123] Gathering logs for kube-scheduler [88b8db5c0022] ...
	I0829 11:58:06.814241    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88b8db5c0022"
	I0829 11:58:06.826155    4103 logs.go:123] Gathering logs for kube-controller-manager [6871f8a47711] ...
	I0829 11:58:06.826166    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6871f8a47711"
	I0829 11:58:06.843804    4103 logs.go:123] Gathering logs for coredns [91a0219bc66f] ...
	I0829 11:58:06.843817    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91a0219bc66f"
	I0829 11:58:06.856449    4103 logs.go:123] Gathering logs for Docker ...
	I0829 11:58:06.856461    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0829 11:58:06.878537    4103 logs.go:123] Gathering logs for dmesg ...
	I0829 11:58:06.878545    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 11:58:06.882616    4103 logs.go:123] Gathering logs for etcd [0afeef147c8b] ...
	I0829 11:58:06.882623    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0afeef147c8b"
	I0829 11:58:06.896448    4103 logs.go:123] Gathering logs for describe nodes ...
	I0829 11:58:06.896461    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0829 11:58:06.932063    4103 logs.go:123] Gathering logs for kube-proxy [1d9e93bcb1b2] ...
	I0829 11:58:06.932075    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d9e93bcb1b2"
	I0829 11:58:09.446256    4103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0829 11:58:14.448963    4103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0829 11:58:14.449496    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0829 11:58:14.477612    4103 logs.go:276] 2 containers: [939d9c84d1e1 88ec4ec8b073]
	I0829 11:58:14.477736    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0829 11:58:14.495557    4103 logs.go:276] 2 containers: [0afeef147c8b fba8ccc4b085]
	I0829 11:58:14.495643    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0829 11:58:14.513784    4103 logs.go:276] 1 containers: [91a0219bc66f]
	I0829 11:58:14.513858    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0829 11:58:14.525113    4103 logs.go:276] 2 containers: [52c287dcd5d2 88b8db5c0022]
	I0829 11:58:14.525183    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0829 11:58:14.540211    4103 logs.go:276] 1 containers: [1d9e93bcb1b2]
	I0829 11:58:14.540282    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0829 11:58:14.550759    4103 logs.go:276] 2 containers: [6871f8a47711 0b5b5924d1b0]
	I0829 11:58:14.550821    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0829 11:58:14.561180    4103 logs.go:276] 0 containers: []
	W0829 11:58:14.561190    4103 logs.go:278] No container was found matching "kindnet"
	I0829 11:58:14.561254    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0829 11:58:14.572016    4103 logs.go:276] 1 containers: [a221e919fb63]
	I0829 11:58:14.572034    4103 logs.go:123] Gathering logs for kube-apiserver [88ec4ec8b073] ...
	I0829 11:58:14.572040    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88ec4ec8b073"
	I0829 11:58:14.609092    4103 logs.go:123] Gathering logs for etcd [0afeef147c8b] ...
	I0829 11:58:14.609103    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0afeef147c8b"
	I0829 11:58:14.627061    4103 logs.go:123] Gathering logs for coredns [91a0219bc66f] ...
	I0829 11:58:14.627072    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91a0219bc66f"
	I0829 11:58:14.638560    4103 logs.go:123] Gathering logs for storage-provisioner [a221e919fb63] ...
	I0829 11:58:14.638573    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a221e919fb63"
	I0829 11:58:14.662845    4103 logs.go:123] Gathering logs for kubelet ...
	I0829 11:58:14.662858    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 11:58:14.702012    4103 logs.go:123] Gathering logs for dmesg ...
	I0829 11:58:14.702022    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 11:58:14.705943    4103 logs.go:123] Gathering logs for etcd [fba8ccc4b085] ...
	I0829 11:58:14.705951    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fba8ccc4b085"
	I0829 11:58:14.735395    4103 logs.go:123] Gathering logs for kube-proxy [1d9e93bcb1b2] ...
	I0829 11:58:14.735411    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d9e93bcb1b2"
	I0829 11:58:14.757748    4103 logs.go:123] Gathering logs for Docker ...
	I0829 11:58:14.757760    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0829 11:58:14.780858    4103 logs.go:123] Gathering logs for container status ...
	I0829 11:58:14.780866    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 11:58:14.793088    4103 logs.go:123] Gathering logs for kube-controller-manager [6871f8a47711] ...
	I0829 11:58:14.793099    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6871f8a47711"
	I0829 11:58:14.810318    4103 logs.go:123] Gathering logs for describe nodes ...
	I0829 11:58:14.810330    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0829 11:58:14.844658    4103 logs.go:123] Gathering logs for kube-apiserver [939d9c84d1e1] ...
	I0829 11:58:14.844670    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 939d9c84d1e1"
	I0829 11:58:14.865499    4103 logs.go:123] Gathering logs for kube-scheduler [52c287dcd5d2] ...
	I0829 11:58:14.865511    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52c287dcd5d2"
	I0829 11:58:14.876646    4103 logs.go:123] Gathering logs for kube-scheduler [88b8db5c0022] ...
	I0829 11:58:14.876659    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88b8db5c0022"
	I0829 11:58:14.889137    4103 logs.go:123] Gathering logs for kube-controller-manager [0b5b5924d1b0] ...
	I0829 11:58:14.889151    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b5b5924d1b0"
	I0829 11:58:17.405283    4103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0829 11:58:22.406510    4103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0829 11:58:22.406749    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0829 11:58:22.436965    4103 logs.go:276] 2 containers: [939d9c84d1e1 88ec4ec8b073]
	I0829 11:58:22.437090    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0829 11:58:22.455296    4103 logs.go:276] 2 containers: [0afeef147c8b fba8ccc4b085]
	I0829 11:58:22.455392    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0829 11:58:22.468760    4103 logs.go:276] 1 containers: [91a0219bc66f]
	I0829 11:58:22.468835    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0829 11:58:22.480351    4103 logs.go:276] 2 containers: [52c287dcd5d2 88b8db5c0022]
	I0829 11:58:22.480418    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0829 11:58:22.491917    4103 logs.go:276] 1 containers: [1d9e93bcb1b2]
	I0829 11:58:22.491985    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0829 11:58:22.502251    4103 logs.go:276] 2 containers: [6871f8a47711 0b5b5924d1b0]
	I0829 11:58:22.502318    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0829 11:58:22.512055    4103 logs.go:276] 0 containers: []
	W0829 11:58:22.512073    4103 logs.go:278] No container was found matching "kindnet"
	I0829 11:58:22.512131    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0829 11:58:22.524313    4103 logs.go:276] 1 containers: [a221e919fb63]
	I0829 11:58:22.524330    4103 logs.go:123] Gathering logs for kube-scheduler [52c287dcd5d2] ...
	I0829 11:58:22.524336    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 52c287dcd5d2"
	I0829 11:58:22.536553    4103 logs.go:123] Gathering logs for kube-apiserver [88ec4ec8b073] ...
	I0829 11:58:22.536564    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88ec4ec8b073"
	I0829 11:58:22.575271    4103 logs.go:123] Gathering logs for etcd [0afeef147c8b] ...
	I0829 11:58:22.575285    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0afeef147c8b"
	I0829 11:58:22.589751    4103 logs.go:123] Gathering logs for coredns [91a0219bc66f] ...
	I0829 11:58:22.589768    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 91a0219bc66f"
	I0829 11:58:22.601220    4103 logs.go:123] Gathering logs for kube-scheduler [88b8db5c0022] ...
	I0829 11:58:22.601232    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 88b8db5c0022"
	I0829 11:58:22.613658    4103 logs.go:123] Gathering logs for kube-proxy [1d9e93bcb1b2] ...
	I0829 11:58:22.613668    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d9e93bcb1b2"
	I0829 11:58:22.625553    4103 logs.go:123] Gathering logs for kube-controller-manager [6871f8a47711] ...
	I0829 11:58:22.625564    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6871f8a47711"
	I0829 11:58:22.642881    4103 logs.go:123] Gathering logs for kubelet ...
	I0829 11:58:22.642894    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0829 11:58:22.682825    4103 logs.go:123] Gathering logs for kube-apiserver [939d9c84d1e1] ...
	I0829 11:58:22.682835    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 939d9c84d1e1"
	I0829 11:58:22.696665    4103 logs.go:123] Gathering logs for etcd [fba8ccc4b085] ...
	I0829 11:58:22.696675    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fba8ccc4b085"
	I0829 11:58:22.711014    4103 logs.go:123] Gathering logs for kube-controller-manager [0b5b5924d1b0] ...
	I0829 11:58:22.711025    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0b5b5924d1b0"
	I0829 11:58:22.724971    4103 logs.go:123] Gathering logs for storage-provisioner [a221e919fb63] ...
	I0829 11:58:22.724981    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a221e919fb63"
	I0829 11:58:22.736899    4103 logs.go:123] Gathering logs for dmesg ...
	I0829 11:58:22.736914    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 11:58:22.741058    4103 logs.go:123] Gathering logs for describe nodes ...
	I0829 11:58:22.741064    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0829 11:58:22.776882    4103 logs.go:123] Gathering logs for Docker ...
	I0829 11:58:22.776896    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0829 11:58:22.801364    4103 logs.go:123] Gathering logs for container status ...
	I0829 11:58:22.801376    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
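Note: the block above is one complete diagnostics pass: minikube enumerates each control-plane container by docker name filter, then tails its logs. The same data can be collected by hand; a sketch assuming docker is the container runtime, as in this run:

    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager storage-provisioner; do
      for id in $(docker ps -a --filter=name=k8s_$c --format={{.ID}}); do
        docker logs --tail 400 "$id"
      done
    done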
	I0829 11:58:25.318506    4103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0829 11:58:30.321009    4103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0829 11:58:30.321129    4103 kubeadm.go:597] duration metric: took 4m3.931552167s to restartPrimaryControlPlane
	W0829 11:58:30.321188    4103 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0829 11:58:30.321219    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0829 11:58:31.325794    4103 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.00457475s)
	I0829 11:58:31.325863    4103 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0829 11:58:31.330696    4103 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0829 11:58:31.333650    4103 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0829 11:58:31.336384    4103 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0829 11:58:31.336392    4103 kubeadm.go:157] found existing configuration files:
	
	I0829 11:58:31.336418    4103 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50284 /etc/kubernetes/admin.conf
	I0829 11:58:31.339182    4103 kubeadm.go:163] "https://control-plane.minikube.internal:50284" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:50284 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0829 11:58:31.339206    4103 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0829 11:58:31.342428    4103 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50284 /etc/kubernetes/kubelet.conf
	I0829 11:58:31.345181    4103 kubeadm.go:163] "https://control-plane.minikube.internal:50284" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:50284 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0829 11:58:31.345203    4103 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0829 11:58:31.347668    4103 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50284 /etc/kubernetes/controller-manager.conf
	I0829 11:58:31.350820    4103 kubeadm.go:163] "https://control-plane.minikube.internal:50284" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:50284 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0829 11:58:31.350841    4103 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0829 11:58:31.353883    4103 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:50284 /etc/kubernetes/scheduler.conf
	I0829 11:58:31.356446    4103 kubeadm.go:163] "https://control-plane.minikube.internal:50284" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:50284 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0829 11:58:31.356466    4103 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
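Note: the grep/rm sequence above is minikube's stale-config cleanup: for each kubeconfig under /etc/kubernetes it greps for the expected apiserver URL and removes the file when the URL is missing. Every grep exits with status 2 here because `kubeadm reset` had already deleted the files. The pattern, condensed:

    URL=https://control-plane.minikube.internal:50284
    for f in admin kubelet controller-manager scheduler; do
      sudo grep "$URL" /etc/kubernetes/$f.conf || sudo rm -f /etc/kubernetes/$f.conf
    done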
	I0829 11:58:31.359383    4103 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0829 11:58:31.375728    4103 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I0829 11:58:31.375759    4103 kubeadm.go:310] [preflight] Running pre-flight checks
	I0829 11:58:31.426462    4103 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0829 11:58:31.426527    4103 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0829 11:58:31.426599    4103 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0829 11:58:31.479164    4103 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0829 11:58:31.483414    4103 out.go:235]   - Generating certificates and keys ...
	I0829 11:58:31.483450    4103 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0829 11:58:31.483496    4103 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0829 11:58:31.483548    4103 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0829 11:58:31.483577    4103 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I0829 11:58:31.483610    4103 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I0829 11:58:31.483637    4103 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I0829 11:58:31.483675    4103 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I0829 11:58:31.483722    4103 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I0829 11:58:31.483755    4103 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0829 11:58:31.483796    4103 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0829 11:58:31.483814    4103 kubeadm.go:310] [certs] Using the existing "sa" key
	I0829 11:58:31.483858    4103 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0829 11:58:31.591736    4103 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0829 11:58:31.683736    4103 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0829 11:58:31.852968    4103 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0829 11:58:31.969104    4103 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0829 11:58:31.997801    4103 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0829 11:58:31.998233    4103 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0829 11:58:31.998348    4103 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0829 11:58:32.084076    4103 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0829 11:58:32.088235    4103 out.go:235]   - Booting up control plane ...
	I0829 11:58:32.088279    4103 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0829 11:58:32.088314    4103 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0829 11:58:32.088347    4103 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0829 11:58:32.088390    4103 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0829 11:58:32.088496    4103 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0829 11:58:37.089581    4103 kubeadm.go:310] [apiclient] All control plane components are healthy after 5.003029 seconds
	I0829 11:58:37.089655    4103 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0829 11:58:37.094302    4103 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0829 11:58:37.615805    4103 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0829 11:58:37.616418    4103 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-585000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0829 11:58:38.119929    4103 kubeadm.go:310] [bootstrap-token] Using token: eujh42.pt1z4cijvvadi89j
	I0829 11:58:38.126552    4103 out.go:235]   - Configuring RBAC rules ...
	I0829 11:58:38.126613    4103 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0829 11:58:38.126725    4103 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0829 11:58:38.134881    4103 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0829 11:58:38.135837    4103 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0829 11:58:38.136763    4103 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0829 11:58:38.137664    4103 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0829 11:58:38.141075    4103 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0829 11:58:38.301687    4103 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0829 11:58:38.525961    4103 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0829 11:58:38.526550    4103 kubeadm.go:310] 
	I0829 11:58:38.526585    4103 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0829 11:58:38.526593    4103 kubeadm.go:310] 
	I0829 11:58:38.526649    4103 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0829 11:58:38.526653    4103 kubeadm.go:310] 
	I0829 11:58:38.526665    4103 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0829 11:58:38.526691    4103 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0829 11:58:38.526728    4103 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0829 11:58:38.526732    4103 kubeadm.go:310] 
	I0829 11:58:38.526775    4103 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0829 11:58:38.526778    4103 kubeadm.go:310] 
	I0829 11:58:38.526809    4103 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0829 11:58:38.526817    4103 kubeadm.go:310] 
	I0829 11:58:38.526858    4103 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0829 11:58:38.526903    4103 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0829 11:58:38.526949    4103 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0829 11:58:38.526953    4103 kubeadm.go:310] 
	I0829 11:58:38.527003    4103 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0829 11:58:38.527047    4103 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0829 11:58:38.527052    4103 kubeadm.go:310] 
	I0829 11:58:38.527095    4103 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token eujh42.pt1z4cijvvadi89j \
	I0829 11:58:38.527157    4103 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a85be241893e40b79217c6f73688d370693933870156b869b3fa902a9be4179f \
	I0829 11:58:38.527168    4103 kubeadm.go:310] 	--control-plane 
	I0829 11:58:38.527171    4103 kubeadm.go:310] 
	I0829 11:58:38.527213    4103 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0829 11:58:38.527218    4103 kubeadm.go:310] 
	I0829 11:58:38.527269    4103 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token eujh42.pt1z4cijvvadi89j \
	I0829 11:58:38.527316    4103 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a85be241893e40b79217c6f73688d370693933870156b869b3fa902a9be4179f 
	I0829 11:58:38.527499    4103 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
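Note: the only preflight warning is Service-Kubelet; the remedy kubeadm names is

    sudo systemctl enable kubelet.service

which only affects start-on-boot. It is benign in this run because minikube starts the service itself a few lines below (`systemctl daemon-reload`, then `systemctl start kubelet`).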
	I0829 11:58:38.527574    4103 cni.go:84] Creating CNI manager for ""
	I0829 11:58:38.527584    4103 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0829 11:58:38.531217    4103 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0829 11:58:38.535057    4103 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0829 11:58:38.538092    4103 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
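Note: the exact contents of the 496-byte /etc/cni/net.d/1-k8s.conflist are not shown in this log. For illustration only, a representative bridge conflist has this shape (field values here are assumptions, not the file minikube wrote):

    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        }
      ]
    }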
	I0829 11:58:38.542688    4103 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0829 11:58:38.542749    4103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0829 11:58:38.542750    4103 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-585000 minikube.k8s.io/updated_at=2024_08_29T11_58_38_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=95341f0b655cea8be5ebfc6bf112c8367dc08d33 minikube.k8s.io/name=stopped-upgrade-585000 minikube.k8s.io/primary=true
	I0829 11:58:38.581943    4103 kubeadm.go:1113] duration metric: took 39.2295ms to wait for elevateKubeSystemPrivileges
	I0829 11:58:38.581952    4103 ops.go:34] apiserver oom_adj: -16
	I0829 11:58:38.581963    4103 kubeadm.go:394] duration metric: took 4m12.206221334s to StartCluster
	I0829 11:58:38.581974    4103 settings.go:142] acquiring lock: {Name:mk4c43097bad4576952ccc223d0a8a031914c5a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 11:58:38.582060    4103 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19531-965/kubeconfig
	I0829 11:58:38.582449    4103 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19531-965/kubeconfig: {Name:mk8af293b3e18a99fbcb2b7e12f57a5251bf5686 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 11:58:38.582664    4103 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0829 11:58:38.582719    4103 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0829 11:58:38.582757    4103 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-585000"
	I0829 11:58:38.582770    4103 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-585000"
	W0829 11:58:38.582773    4103 addons.go:243] addon storage-provisioner should already be in state true
	I0829 11:58:38.582796    4103 host.go:66] Checking if "stopped-upgrade-585000" exists ...
	I0829 11:58:38.582780    4103 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-585000"
	I0829 11:58:38.582845    4103 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-585000"
	I0829 11:58:38.582850    4103 config.go:182] Loaded profile config "stopped-upgrade-585000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0829 11:58:38.583302    4103 retry.go:31] will retry after 1.151278012s: connect: dial unix /Users/jenkins/minikube-integration/19531-965/.minikube/machines/stopped-upgrade-585000/monitor: connect: connection refused
	I0829 11:58:38.587199    4103 out.go:177] * Verifying Kubernetes components...
	I0829 11:58:38.594192    4103 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 11:58:38.598093    4103 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0829 11:58:38.601250    4103 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0829 11:58:38.601267    4103 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0829 11:58:38.601283    4103 sshutil.go:53] new ssh client: &{IP:localhost Port:50252 SSHKeyPath:/Users/jenkins/minikube-integration/19531-965/.minikube/machines/stopped-upgrade-585000/id_rsa Username:docker}
	I0829 11:58:38.686311    4103 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0829 11:58:38.691928    4103 api_server.go:52] waiting for apiserver process to appear ...
	I0829 11:58:38.691978    4103 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0829 11:58:38.696005    4103 api_server.go:72] duration metric: took 113.331083ms to wait for apiserver process to appear ...
	I0829 11:58:38.696012    4103 api_server.go:88] waiting for apiserver healthz status ...
	I0829 11:58:38.696019    4103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0829 11:58:38.762839    4103 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0829 11:58:39.737640    4103 kapi.go:59] client config for stopped-upgrade-585000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19531-965/.minikube/profiles/stopped-upgrade-585000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19531-965/.minikube/profiles/stopped-upgrade-585000/client.key", CAFile:"/Users/jenkins/minikube-integration/19531-965/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1045aff80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0829 11:58:39.737772    4103 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-585000"
	W0829 11:58:39.737780    4103 addons.go:243] addon default-storageclass should already be in state true
	I0829 11:58:39.737791    4103 host.go:66] Checking if "stopped-upgrade-585000" exists ...
	I0829 11:58:39.738358    4103 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0829 11:58:39.738364    4103 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0829 11:58:39.738369    4103 sshutil.go:53] new ssh client: &{IP:localhost Port:50252 SSHKeyPath:/Users/jenkins/minikube-integration/19531-965/.minikube/machines/stopped-upgrade-585000/id_rsa Username:docker}
	I0829 11:58:39.776154    4103 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0829 11:58:39.832798    4103 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0829 11:58:39.832810    4103 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0829 11:58:43.698107    4103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0829 11:58:43.698150    4103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0829 11:58:48.698473    4103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0829 11:58:48.698520    4103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0829 11:58:53.698864    4103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0829 11:58:53.698905    4103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0829 11:58:58.699324    4103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0829 11:58:58.699351    4103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0829 11:59:03.699913    4103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0829 11:59:03.699958    4103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0829 11:59:08.700790    4103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0829 11:59:08.700828    4103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0829 11:59:09.833511    4103 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0829 11:59:09.837579    4103 out.go:177] * Enabled addons: storage-provisioner
	I0829 11:59:09.846431    4103 addons.go:510] duration metric: took 31.264167125s for enable addons: enabled=[storage-provisioner]
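Note: the two addons diverge above because they touch the API differently. storage-provisioner is a plain `kubectl apply` of a manifest, which returned without error; default-storageclass additionally runs a callback that lists StorageClasses through the client-go API, and that call failed with `dial tcp 10.0.2.15:8443: i/o timeout`. With a reachable apiserver, the equivalent manual check would be:

    kubectl get storageclass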
	I0829 11:59:13.701859    4103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0829 11:59:13.701881    4103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0829 11:59:18.703303    4103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0829 11:59:18.703347    4103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0829 11:59:23.705022    4103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0829 11:59:23.705063    4103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0829 11:59:28.707246    4103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0829 11:59:28.707268    4103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0829 11:59:33.709386    4103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0829 11:59:33.709424    4103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0829 11:59:38.711617    4103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0829 11:59:38.711761    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0829 11:59:38.723241    4103 logs.go:276] 1 containers: [4f3fc224617b]
	I0829 11:59:38.723314    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0829 11:59:38.734320    4103 logs.go:276] 1 containers: [a2c34b18c76e]
	I0829 11:59:38.734395    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0829 11:59:38.744857    4103 logs.go:276] 2 containers: [7e0d35fd301c b0e67f216cc7]
	I0829 11:59:38.744921    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0829 11:59:38.755009    4103 logs.go:276] 1 containers: [54debee86044]
	I0829 11:59:38.755083    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0829 11:59:38.765967    4103 logs.go:276] 1 containers: [300a11d66e22]
	I0829 11:59:38.766036    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0829 11:59:38.776405    4103 logs.go:276] 1 containers: [dbb9f045fa3d]
	I0829 11:59:38.776482    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0829 11:59:38.787135    4103 logs.go:276] 0 containers: []
	W0829 11:59:38.787147    4103 logs.go:278] No container was found matching "kindnet"
	I0829 11:59:38.787204    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0829 11:59:38.797152    4103 logs.go:276] 1 containers: [5a1fa9b460aa]
	I0829 11:59:38.797165    4103 logs.go:123] Gathering logs for dmesg ...
	I0829 11:59:38.797171    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 11:59:38.801683    4103 logs.go:123] Gathering logs for describe nodes ...
	I0829 11:59:38.801690    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0829 11:59:38.839804    4103 logs.go:123] Gathering logs for etcd [a2c34b18c76e] ...
	I0829 11:59:38.839818    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2c34b18c76e"
	I0829 11:59:38.854147    4103 logs.go:123] Gathering logs for coredns [b0e67f216cc7] ...
	I0829 11:59:38.854159    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0e67f216cc7"
	I0829 11:59:38.866292    4103 logs.go:123] Gathering logs for storage-provisioner [5a1fa9b460aa] ...
	I0829 11:59:38.866303    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a1fa9b460aa"
	I0829 11:59:38.877846    4103 logs.go:123] Gathering logs for Docker ...
	I0829 11:59:38.877860    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0829 11:59:38.902721    4103 logs.go:123] Gathering logs for container status ...
	I0829 11:59:38.902730    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 11:59:38.913889    4103 logs.go:123] Gathering logs for kubelet ...
	I0829 11:59:38.913902    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0829 11:59:38.951218    4103 logs.go:138] Found kubelet problem: Aug 29 18:58:52 stopped-upgrade-585000 kubelet[10170]: W0829 18:58:52.247392   10170 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-585000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-585000' and this object
	W0829 11:59:38.951312    4103 logs.go:138] Found kubelet problem: Aug 29 18:58:52 stopped-upgrade-585000 kubelet[10170]: E0829 18:58:52.247517   10170 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-585000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-585000' and this object
	I0829 11:59:38.953092    4103 logs.go:123] Gathering logs for kube-apiserver [4f3fc224617b] ...
	I0829 11:59:38.953102    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f3fc224617b"
	I0829 11:59:38.967425    4103 logs.go:123] Gathering logs for coredns [7e0d35fd301c] ...
	I0829 11:59:38.967436    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e0d35fd301c"
	I0829 11:59:38.978913    4103 logs.go:123] Gathering logs for kube-scheduler [54debee86044] ...
	I0829 11:59:38.978926    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54debee86044"
	I0829 11:59:38.993498    4103 logs.go:123] Gathering logs for kube-proxy [300a11d66e22] ...
	I0829 11:59:38.993508    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 300a11d66e22"
	I0829 11:59:39.005331    4103 logs.go:123] Gathering logs for kube-controller-manager [dbb9f045fa3d] ...
	I0829 11:59:39.005341    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbb9f045fa3d"
	I0829 11:59:39.023070    4103 out.go:358] Setting ErrFile to fd 2...
	I0829 11:59:39.023086    4103 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0829 11:59:39.023111    4103 out.go:270] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0829 11:59:39.023115    4103 out.go:270]   Aug 29 18:58:52 stopped-upgrade-585000 kubelet[10170]: W0829 18:58:52.247392   10170 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-585000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-585000' and this object
	  Aug 29 18:58:52 stopped-upgrade-585000 kubelet[10170]: W0829 18:58:52.247392   10170 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-585000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-585000' and this object
	W0829 11:59:39.023118    4103 out.go:270]   Aug 29 18:58:52 stopped-upgrade-585000 kubelet[10170]: E0829 18:58:52.247517   10170 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-585000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-585000' and this object
	  Aug 29 18:58:52 stopped-upgrade-585000 kubelet[10170]: E0829 18:58:52.247517   10170 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-585000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-585000' and this object
	I0829 11:59:39.023122    4103 out.go:358] Setting ErrFile to fd 2...
	I0829 11:59:39.023126    4103 out.go:392] TERM=,COLORTERM=, which probably does not support color
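Note: the repeated kubelet problem flagged above is a Node-authorizer denial, not a crash: under the node authorizer a kubelet may only read ConfigMaps referenced by pods bound to its own node, and at 18:58:52 no coredns pod was yet bound to stopped-upgrade-585000, hence "no relationship found between node ... and this object". The watch succeeds once a coredns pod lands on the node; with a working kubeconfig the binding can be inspected via:

    kubectl -n kube-system get pods -l k8s-app=kube-dns -o wide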
	I0829 11:59:49.027124    4103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0829 11:59:54.029505    4103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0829 11:59:54.029701    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0829 11:59:54.051608    4103 logs.go:276] 1 containers: [4f3fc224617b]
	I0829 11:59:54.051711    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0829 11:59:54.066083    4103 logs.go:276] 1 containers: [a2c34b18c76e]
	I0829 11:59:54.066159    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0829 11:59:54.078080    4103 logs.go:276] 2 containers: [7e0d35fd301c b0e67f216cc7]
	I0829 11:59:54.078151    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0829 11:59:54.088900    4103 logs.go:276] 1 containers: [54debee86044]
	I0829 11:59:54.088966    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0829 11:59:54.099467    4103 logs.go:276] 1 containers: [300a11d66e22]
	I0829 11:59:54.099536    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0829 11:59:54.109551    4103 logs.go:276] 1 containers: [dbb9f045fa3d]
	I0829 11:59:54.109618    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0829 11:59:54.119909    4103 logs.go:276] 0 containers: []
	W0829 11:59:54.119921    4103 logs.go:278] No container was found matching "kindnet"
	I0829 11:59:54.119979    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0829 11:59:54.131616    4103 logs.go:276] 1 containers: [5a1fa9b460aa]
	I0829 11:59:54.131637    4103 logs.go:123] Gathering logs for kube-proxy [300a11d66e22] ...
	I0829 11:59:54.131642    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 300a11d66e22"
	I0829 11:59:54.142985    4103 logs.go:123] Gathering logs for Docker ...
	I0829 11:59:54.142998    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0829 11:59:54.167326    4103 logs.go:123] Gathering logs for kubelet ...
	I0829 11:59:54.167334    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0829 11:59:54.201831    4103 logs.go:138] Found kubelet problem: Aug 29 18:58:52 stopped-upgrade-585000 kubelet[10170]: W0829 18:58:52.247392   10170 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-585000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-585000' and this object
	W0829 11:59:54.201926    4103 logs.go:138] Found kubelet problem: Aug 29 18:58:52 stopped-upgrade-585000 kubelet[10170]: E0829 18:58:52.247517   10170 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-585000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-585000' and this object
	I0829 11:59:54.203803    4103 logs.go:123] Gathering logs for kube-apiserver [4f3fc224617b] ...
	I0829 11:59:54.203813    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f3fc224617b"
	I0829 11:59:54.217799    4103 logs.go:123] Gathering logs for coredns [7e0d35fd301c] ...
	I0829 11:59:54.217812    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e0d35fd301c"
	I0829 11:59:54.229371    4103 logs.go:123] Gathering logs for kube-scheduler [54debee86044] ...
	I0829 11:59:54.229381    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54debee86044"
	I0829 11:59:54.243672    4103 logs.go:123] Gathering logs for kube-controller-manager [dbb9f045fa3d] ...
	I0829 11:59:54.243683    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbb9f045fa3d"
	I0829 11:59:54.260976    4103 logs.go:123] Gathering logs for storage-provisioner [5a1fa9b460aa] ...
	I0829 11:59:54.260985    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a1fa9b460aa"
	I0829 11:59:54.272236    4103 logs.go:123] Gathering logs for container status ...
	I0829 11:59:54.272250    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 11:59:54.284196    4103 logs.go:123] Gathering logs for dmesg ...
	I0829 11:59:54.284207    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 11:59:54.288350    4103 logs.go:123] Gathering logs for describe nodes ...
	I0829 11:59:54.288356    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0829 11:59:54.326747    4103 logs.go:123] Gathering logs for etcd [a2c34b18c76e] ...
	I0829 11:59:54.326758    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2c34b18c76e"
	I0829 11:59:54.340526    4103 logs.go:123] Gathering logs for coredns [b0e67f216cc7] ...
	I0829 11:59:54.340539    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0e67f216cc7"
	I0829 11:59:54.352666    4103 out.go:358] Setting ErrFile to fd 2...
	I0829 11:59:54.352677    4103 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0829 11:59:54.352707    4103 out.go:270] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0829 11:59:54.352712    4103 out.go:270]   Aug 29 18:58:52 stopped-upgrade-585000 kubelet[10170]: W0829 18:58:52.247392   10170 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-585000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-585000' and this object
	  Aug 29 18:58:52 stopped-upgrade-585000 kubelet[10170]: W0829 18:58:52.247392   10170 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-585000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-585000' and this object
	W0829 11:59:54.352716    4103 out.go:270]   Aug 29 18:58:52 stopped-upgrade-585000 kubelet[10170]: E0829 18:58:52.247517   10170 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-585000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-585000' and this object
	  Aug 29 18:58:52 stopped-upgrade-585000 kubelet[10170]: E0829 18:58:52.247517   10170 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-585000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-585000' and this object
	I0829 11:59:54.352743    4103 out.go:358] Setting ErrFile to fd 2...
	I0829 11:59:54.352746    4103 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 12:00:04.353858    4103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0829 12:00:09.356086    4103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0829 12:00:09.356374    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0829 12:00:09.384226    4103 logs.go:276] 1 containers: [4f3fc224617b]
	I0829 12:00:09.384359    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0829 12:00:09.402569    4103 logs.go:276] 1 containers: [a2c34b18c76e]
	I0829 12:00:09.402661    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0829 12:00:09.416249    4103 logs.go:276] 2 containers: [7e0d35fd301c b0e67f216cc7]
	I0829 12:00:09.416326    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0829 12:00:09.427893    4103 logs.go:276] 1 containers: [54debee86044]
	I0829 12:00:09.427963    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0829 12:00:09.438707    4103 logs.go:276] 1 containers: [300a11d66e22]
	I0829 12:00:09.438776    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0829 12:00:09.454601    4103 logs.go:276] 1 containers: [dbb9f045fa3d]
	I0829 12:00:09.454695    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0829 12:00:09.464783    4103 logs.go:276] 0 containers: []
	W0829 12:00:09.464795    4103 logs.go:278] No container was found matching "kindnet"
	I0829 12:00:09.464849    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0829 12:00:09.474944    4103 logs.go:276] 1 containers: [5a1fa9b460aa]
	I0829 12:00:09.474957    4103 logs.go:123] Gathering logs for container status ...
	I0829 12:00:09.474963    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 12:00:09.486382    4103 logs.go:123] Gathering logs for kubelet ...
	I0829 12:00:09.486397    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0829 12:00:09.522293    4103 logs.go:138] Found kubelet problem: Aug 29 18:58:52 stopped-upgrade-585000 kubelet[10170]: W0829 18:58:52.247392   10170 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-585000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-585000' and this object
	W0829 12:00:09.522385    4103 logs.go:138] Found kubelet problem: Aug 29 18:58:52 stopped-upgrade-585000 kubelet[10170]: E0829 18:58:52.247517   10170 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-585000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-585000' and this object
	I0829 12:00:09.524252    4103 logs.go:123] Gathering logs for kube-apiserver [4f3fc224617b] ...
	I0829 12:00:09.524258    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f3fc224617b"
	I0829 12:00:09.538833    4103 logs.go:123] Gathering logs for coredns [7e0d35fd301c] ...
	I0829 12:00:09.538844    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e0d35fd301c"
	I0829 12:00:09.551313    4103 logs.go:123] Gathering logs for kube-proxy [300a11d66e22] ...
	I0829 12:00:09.551324    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 300a11d66e22"
	I0829 12:00:09.563224    4103 logs.go:123] Gathering logs for kube-scheduler [54debee86044] ...
	I0829 12:00:09.563235    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54debee86044"
	I0829 12:00:09.578400    4103 logs.go:123] Gathering logs for kube-controller-manager [dbb9f045fa3d] ...
	I0829 12:00:09.578411    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbb9f045fa3d"
	I0829 12:00:09.596337    4103 logs.go:123] Gathering logs for storage-provisioner [5a1fa9b460aa] ...
	I0829 12:00:09.596347    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a1fa9b460aa"
	I0829 12:00:09.607737    4103 logs.go:123] Gathering logs for Docker ...
	I0829 12:00:09.607747    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0829 12:00:09.632437    4103 logs.go:123] Gathering logs for dmesg ...
	I0829 12:00:09.632447    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 12:00:09.636589    4103 logs.go:123] Gathering logs for describe nodes ...
	I0829 12:00:09.636599    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0829 12:00:09.670956    4103 logs.go:123] Gathering logs for etcd [a2c34b18c76e] ...
	I0829 12:00:09.670966    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2c34b18c76e"
	I0829 12:00:09.692427    4103 logs.go:123] Gathering logs for coredns [b0e67f216cc7] ...
	I0829 12:00:09.692437    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0e67f216cc7"
	I0829 12:00:09.710680    4103 out.go:358] Setting ErrFile to fd 2...
	I0829 12:00:09.710691    4103 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0829 12:00:09.710717    4103 out.go:270] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0829 12:00:09.710747    4103 out.go:270]   Aug 29 18:58:52 stopped-upgrade-585000 kubelet[10170]: W0829 18:58:52.247392   10170 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-585000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-585000' and this object
	  Aug 29 18:58:52 stopped-upgrade-585000 kubelet[10170]: W0829 18:58:52.247392   10170 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-585000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-585000' and this object
	W0829 12:00:09.710752    4103 out.go:270]   Aug 29 18:58:52 stopped-upgrade-585000 kubelet[10170]: E0829 18:58:52.247517   10170 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-585000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-585000' and this object
	  Aug 29 18:58:52 stopped-upgrade-585000 kubelet[10170]: E0829 18:58:52.247517   10170 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-585000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-585000' and this object
	I0829 12:00:09.710756    4103 out.go:358] Setting ErrFile to fd 2...
	I0829 12:00:09.710758    4103 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 12:00:19.714747    4103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0829 12:00:24.717113    4103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0829 12:00:24.717348    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0829 12:00:24.744024    4103 logs.go:276] 1 containers: [4f3fc224617b]
	I0829 12:00:24.744111    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0829 12:00:24.757849    4103 logs.go:276] 1 containers: [a2c34b18c76e]
	I0829 12:00:24.757922    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0829 12:00:24.769355    4103 logs.go:276] 2 containers: [7e0d35fd301c b0e67f216cc7]
	I0829 12:00:24.769428    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0829 12:00:24.788489    4103 logs.go:276] 1 containers: [54debee86044]
	I0829 12:00:24.788562    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0829 12:00:24.798514    4103 logs.go:276] 1 containers: [300a11d66e22]
	I0829 12:00:24.798588    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0829 12:00:24.809010    4103 logs.go:276] 1 containers: [dbb9f045fa3d]
	I0829 12:00:24.809081    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0829 12:00:24.819164    4103 logs.go:276] 0 containers: []
	W0829 12:00:24.819176    4103 logs.go:278] No container was found matching "kindnet"
	I0829 12:00:24.819233    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0829 12:00:24.829460    4103 logs.go:276] 1 containers: [5a1fa9b460aa]
	I0829 12:00:24.829475    4103 logs.go:123] Gathering logs for kubelet ...
	I0829 12:00:24.829480    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0829 12:00:24.865094    4103 logs.go:138] Found kubelet problem: Aug 29 18:58:52 stopped-upgrade-585000 kubelet[10170]: W0829 18:58:52.247392   10170 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-585000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-585000' and this object
	W0829 12:00:24.865186    4103 logs.go:138] Found kubelet problem: Aug 29 18:58:52 stopped-upgrade-585000 kubelet[10170]: E0829 18:58:52.247517   10170 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-585000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-585000' and this object
	I0829 12:00:24.866953    4103 logs.go:123] Gathering logs for dmesg ...
	I0829 12:00:24.866958    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 12:00:24.871298    4103 logs.go:123] Gathering logs for kube-apiserver [4f3fc224617b] ...
	I0829 12:00:24.871307    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f3fc224617b"
	I0829 12:00:24.886959    4103 logs.go:123] Gathering logs for kube-scheduler [54debee86044] ...
	I0829 12:00:24.886972    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54debee86044"
	I0829 12:00:24.905035    4103 logs.go:123] Gathering logs for kube-proxy [300a11d66e22] ...
	I0829 12:00:24.905045    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 300a11d66e22"
	I0829 12:00:24.916505    4103 logs.go:123] Gathering logs for kube-controller-manager [dbb9f045fa3d] ...
	I0829 12:00:24.916515    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbb9f045fa3d"
	I0829 12:00:24.934528    4103 logs.go:123] Gathering logs for storage-provisioner [5a1fa9b460aa] ...
	I0829 12:00:24.934539    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a1fa9b460aa"
	I0829 12:00:24.950033    4103 logs.go:123] Gathering logs for describe nodes ...
	I0829 12:00:24.950044    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0829 12:00:24.984005    4103 logs.go:123] Gathering logs for etcd [a2c34b18c76e] ...
	I0829 12:00:24.984017    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2c34b18c76e"
	I0829 12:00:24.998043    4103 logs.go:123] Gathering logs for coredns [7e0d35fd301c] ...
	I0829 12:00:24.998054    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e0d35fd301c"
	I0829 12:00:25.009534    4103 logs.go:123] Gathering logs for coredns [b0e67f216cc7] ...
	I0829 12:00:25.009547    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0e67f216cc7"
	I0829 12:00:25.030289    4103 logs.go:123] Gathering logs for Docker ...
	I0829 12:00:25.030303    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0829 12:00:25.055198    4103 logs.go:123] Gathering logs for container status ...
	I0829 12:00:25.055207    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
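	Each "Gathering logs for X ..." step resolves to one of the bash one-liners shown being run: journalctl for kubelet and Docker, dmesg, `docker logs --tail 400` per container, kubectl describe nodes, and the crictl/docker fallback for container status. A table-driven sketch of that dispatch, assuming direct local execution (the originals run through ssh_runner on the guest); the command strings are copied from the log, while the map-driven structure is an illustrative assumption.

	package main

	import (
		"fmt"
		"os/exec"
	)

	// One entry per log source named in the "Gathering logs for ..." lines.
	var sources = map[string]string{
		"kubelet":          "sudo journalctl -u kubelet -n 400",
		"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
		"Docker":           "sudo journalctl -u docker -u cri-docker -n 400",
		"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
	}

	// gather runs one source's command the way the log shows:
	// /bin/bash -c "<command>", capturing stdout and stderr together.
	func gather(name, cmd string) (string, error) {
		fmt.Printf("Gathering logs for %s ...\n", name)
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		return string(out), err
	}

	func main() {
		for name, cmd := range sources {
			out, err := gather(name, cmd)
			if err != nil {
				fmt.Println(name, "failed:", err)
				continue
			}
			fmt.Print(out)
		}
	}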
	I0829 12:00:25.066866    4103 out.go:358] Setting ErrFile to fd 2...
	I0829 12:00:25.066879    4103 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0829 12:00:25.066905    4103 out.go:270] X Problems detected in kubelet:
	W0829 12:00:25.066909    4103 out.go:270]   Aug 29 18:58:52 stopped-upgrade-585000 kubelet[10170]: W0829 18:58:52.247392   10170 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-585000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-585000' and this object
	W0829 12:00:25.066913    4103 out.go:270]   Aug 29 18:58:52 stopped-upgrade-585000 kubelet[10170]: E0829 18:58:52.247517   10170 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-585000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-585000' and this object
	I0829 12:00:25.066916    4103 out.go:358] Setting ErrFile to fd 2...
	I0829 12:00:25.066919    4103 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 12:00:35.070892    4103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0829 12:00:40.073135    4103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
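	Between log sweeps the upgrade test keeps probing the apiserver (api_server.go:253), and each probe dies with a 5-second client timeout (api_server.go:269); the timestamps put the retry at roughly 10 seconds after each failure. A minimal sketch of such a poll loop, with the endpoint and timings read off the log and everything else (the 4-minute bound, skipping TLS verification) assumed for illustration:

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second, // matches the gap between "Checking" and "stopped"
			Transport: &http.Transport{
				// the cluster cert is private to the VM; verification is
				// skipped here only to keep the sketch self-contained
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		for start := time.Now(); time.Since(start) < 4*time.Minute; time.Sleep(10 * time.Second) {
			fmt.Println("Checking apiserver healthz at https://10.0.2.15:8443/healthz ...")
			resp, err := client.Get("https://10.0.2.15:8443/healthz")
			if err != nil {
				fmt.Println("stopped:", err) // e.g. Client.Timeout exceeded while awaiting headers
				continue // the post statement still sleeps before retrying
			}
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy")
				return
			}
		}
		fmt.Println("gave up waiting for apiserver")
	}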
	I0829 12:00:40.073259    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0829 12:00:40.085615    4103 logs.go:276] 1 containers: [4f3fc224617b]
	I0829 12:00:40.085682    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0829 12:00:40.098279    4103 logs.go:276] 1 containers: [a2c34b18c76e]
	I0829 12:00:40.098350    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0829 12:00:40.109402    4103 logs.go:276] 2 containers: [7e0d35fd301c b0e67f216cc7]
	I0829 12:00:40.109464    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0829 12:00:40.119937    4103 logs.go:276] 1 containers: [54debee86044]
	I0829 12:00:40.120001    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0829 12:00:40.130268    4103 logs.go:276] 1 containers: [300a11d66e22]
	I0829 12:00:40.130346    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0829 12:00:40.141129    4103 logs.go:276] 1 containers: [dbb9f045fa3d]
	I0829 12:00:40.141193    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0829 12:00:40.151295    4103 logs.go:276] 0 containers: []
	W0829 12:00:40.151313    4103 logs.go:278] No container was found matching "kindnet"
	I0829 12:00:40.151376    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0829 12:00:40.161615    4103 logs.go:276] 1 containers: [5a1fa9b460aa]
	I0829 12:00:40.161630    4103 logs.go:123] Gathering logs for storage-provisioner [5a1fa9b460aa] ...
	I0829 12:00:40.161635    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a1fa9b460aa"
	I0829 12:00:40.173280    4103 logs.go:123] Gathering logs for dmesg ...
	I0829 12:00:40.173291    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 12:00:40.177908    4103 logs.go:123] Gathering logs for coredns [b0e67f216cc7] ...
	I0829 12:00:40.177915    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0e67f216cc7"
	I0829 12:00:40.189581    4103 logs.go:123] Gathering logs for kube-proxy [300a11d66e22] ...
	I0829 12:00:40.189592    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 300a11d66e22"
	I0829 12:00:40.202925    4103 logs.go:123] Gathering logs for etcd [a2c34b18c76e] ...
	I0829 12:00:40.202937    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2c34b18c76e"
	I0829 12:00:40.216959    4103 logs.go:123] Gathering logs for coredns [7e0d35fd301c] ...
	I0829 12:00:40.216969    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e0d35fd301c"
	I0829 12:00:40.228988    4103 logs.go:123] Gathering logs for kube-scheduler [54debee86044] ...
	I0829 12:00:40.228999    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54debee86044"
	I0829 12:00:40.244342    4103 logs.go:123] Gathering logs for kube-controller-manager [dbb9f045fa3d] ...
	I0829 12:00:40.244354    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbb9f045fa3d"
	I0829 12:00:40.261756    4103 logs.go:123] Gathering logs for Docker ...
	I0829 12:00:40.261766    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0829 12:00:40.284908    4103 logs.go:123] Gathering logs for kubelet ...
	I0829 12:00:40.284916    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0829 12:00:40.320139    4103 logs.go:138] Found kubelet problem: Aug 29 18:58:52 stopped-upgrade-585000 kubelet[10170]: W0829 18:58:52.247392   10170 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-585000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-585000' and this object
	W0829 12:00:40.320233    4103 logs.go:138] Found kubelet problem: Aug 29 18:58:52 stopped-upgrade-585000 kubelet[10170]: E0829 18:58:52.247517   10170 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-585000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-585000' and this object
	I0829 12:00:40.322102    4103 logs.go:123] Gathering logs for describe nodes ...
	I0829 12:00:40.322110    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0829 12:00:40.361806    4103 logs.go:123] Gathering logs for kube-apiserver [4f3fc224617b] ...
	I0829 12:00:40.361822    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f3fc224617b"
	I0829 12:00:40.377964    4103 logs.go:123] Gathering logs for container status ...
	I0829 12:00:40.377978    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 12:00:40.394822    4103 out.go:358] Setting ErrFile to fd 2...
	I0829 12:00:40.394832    4103 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0829 12:00:40.394858    4103 out.go:270] X Problems detected in kubelet:
	W0829 12:00:40.394863    4103 out.go:270]   Aug 29 18:58:52 stopped-upgrade-585000 kubelet[10170]: W0829 18:58:52.247392   10170 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-585000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-585000' and this object
	W0829 12:00:40.394866    4103 out.go:270]   Aug 29 18:58:52 stopped-upgrade-585000 kubelet[10170]: E0829 18:58:52.247517   10170 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-585000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-585000' and this object
	I0829 12:00:40.394869    4103 out.go:358] Setting ErrFile to fd 2...
	I0829 12:00:40.394872    4103 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 12:00:50.397224    4103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0829 12:00:55.399540    4103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0829 12:00:55.400002    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0829 12:00:55.438997    4103 logs.go:276] 1 containers: [4f3fc224617b]
	I0829 12:00:55.439135    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0829 12:00:55.459485    4103 logs.go:276] 1 containers: [a2c34b18c76e]
	I0829 12:00:55.459582    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0829 12:00:55.475782    4103 logs.go:276] 4 containers: [28e2071c37ba 0359013c9c50 7e0d35fd301c b0e67f216cc7]
	I0829 12:00:55.475864    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0829 12:00:55.488146    4103 logs.go:276] 1 containers: [54debee86044]
	I0829 12:00:55.488207    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0829 12:00:55.499036    4103 logs.go:276] 1 containers: [300a11d66e22]
	I0829 12:00:55.499108    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0829 12:00:55.510133    4103 logs.go:276] 1 containers: [dbb9f045fa3d]
	I0829 12:00:55.510203    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0829 12:00:55.520527    4103 logs.go:276] 0 containers: []
	W0829 12:00:55.520541    4103 logs.go:278] No container was found matching "kindnet"
	I0829 12:00:55.520599    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0829 12:00:55.531299    4103 logs.go:276] 1 containers: [5a1fa9b460aa]
	I0829 12:00:55.531314    4103 logs.go:123] Gathering logs for coredns [28e2071c37ba] ...
	I0829 12:00:55.531319    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28e2071c37ba"
	I0829 12:00:55.543575    4103 logs.go:123] Gathering logs for kube-scheduler [54debee86044] ...
	I0829 12:00:55.543588    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54debee86044"
	I0829 12:00:55.560104    4103 logs.go:123] Gathering logs for container status ...
	I0829 12:00:55.560118    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 12:00:55.571716    4103 logs.go:123] Gathering logs for kubelet ...
	I0829 12:00:55.571728    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0829 12:00:55.607353    4103 logs.go:138] Found kubelet problem: Aug 29 18:58:52 stopped-upgrade-585000 kubelet[10170]: W0829 18:58:52.247392   10170 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-585000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-585000' and this object
	W0829 12:00:55.607454    4103 logs.go:138] Found kubelet problem: Aug 29 18:58:52 stopped-upgrade-585000 kubelet[10170]: E0829 18:58:52.247517   10170 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-585000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-585000' and this object
	I0829 12:00:55.609215    4103 logs.go:123] Gathering logs for etcd [a2c34b18c76e] ...
	I0829 12:00:55.609222    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2c34b18c76e"
	I0829 12:00:55.623935    4103 logs.go:123] Gathering logs for coredns [7e0d35fd301c] ...
	I0829 12:00:55.623947    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e0d35fd301c"
	I0829 12:00:55.635980    4103 logs.go:123] Gathering logs for coredns [b0e67f216cc7] ...
	I0829 12:00:55.635990    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0e67f216cc7"
	I0829 12:00:55.648998    4103 logs.go:123] Gathering logs for Docker ...
	I0829 12:00:55.649010    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0829 12:00:55.672871    4103 logs.go:123] Gathering logs for dmesg ...
	I0829 12:00:55.672882    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 12:00:55.676910    4103 logs.go:123] Gathering logs for describe nodes ...
	I0829 12:00:55.676919    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0829 12:00:55.716648    4103 logs.go:123] Gathering logs for kube-proxy [300a11d66e22] ...
	I0829 12:00:55.716663    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 300a11d66e22"
	I0829 12:00:55.728224    4103 logs.go:123] Gathering logs for kube-controller-manager [dbb9f045fa3d] ...
	I0829 12:00:55.728238    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbb9f045fa3d"
	I0829 12:00:55.746679    4103 logs.go:123] Gathering logs for kube-apiserver [4f3fc224617b] ...
	I0829 12:00:55.746690    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f3fc224617b"
	I0829 12:00:55.761599    4103 logs.go:123] Gathering logs for coredns [0359013c9c50] ...
	I0829 12:00:55.761609    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0359013c9c50"
	I0829 12:00:55.773916    4103 logs.go:123] Gathering logs for storage-provisioner [5a1fa9b460aa] ...
	I0829 12:00:55.773929    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a1fa9b460aa"
	I0829 12:00:55.786355    4103 out.go:358] Setting ErrFile to fd 2...
	I0829 12:00:55.786369    4103 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0829 12:00:55.786395    4103 out.go:270] X Problems detected in kubelet:
	W0829 12:00:55.786399    4103 out.go:270]   Aug 29 18:58:52 stopped-upgrade-585000 kubelet[10170]: W0829 18:58:52.247392   10170 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-585000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-585000' and this object
	W0829 12:00:55.786403    4103 out.go:270]   Aug 29 18:58:52 stopped-upgrade-585000 kubelet[10170]: E0829 18:58:52.247517   10170 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-585000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-585000' and this object
	I0829 12:00:55.786407    4103 out.go:358] Setting ErrFile to fd 2...
	I0829 12:00:55.786411    4103 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 12:01:05.790394    4103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0829 12:01:10.792683    4103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0829 12:01:10.792907    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0829 12:01:10.807488    4103 logs.go:276] 1 containers: [4f3fc224617b]
	I0829 12:01:10.807575    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0829 12:01:10.818840    4103 logs.go:276] 1 containers: [a2c34b18c76e]
	I0829 12:01:10.818916    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0829 12:01:10.829200    4103 logs.go:276] 4 containers: [28e2071c37ba 0359013c9c50 7e0d35fd301c b0e67f216cc7]
	I0829 12:01:10.829268    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0829 12:01:10.839785    4103 logs.go:276] 1 containers: [54debee86044]
	I0829 12:01:10.839856    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0829 12:01:10.850242    4103 logs.go:276] 1 containers: [300a11d66e22]
	I0829 12:01:10.850310    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0829 12:01:10.860729    4103 logs.go:276] 1 containers: [dbb9f045fa3d]
	I0829 12:01:10.860795    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0829 12:01:10.870400    4103 logs.go:276] 0 containers: []
	W0829 12:01:10.870414    4103 logs.go:278] No container was found matching "kindnet"
	I0829 12:01:10.870469    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0829 12:01:10.880862    4103 logs.go:276] 1 containers: [5a1fa9b460aa]
	I0829 12:01:10.880880    4103 logs.go:123] Gathering logs for kube-scheduler [54debee86044] ...
	I0829 12:01:10.880885    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54debee86044"
	I0829 12:01:10.899848    4103 logs.go:123] Gathering logs for kube-proxy [300a11d66e22] ...
	I0829 12:01:10.899861    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 300a11d66e22"
	I0829 12:01:10.911709    4103 logs.go:123] Gathering logs for dmesg ...
	I0829 12:01:10.911720    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 12:01:10.915983    4103 logs.go:123] Gathering logs for kube-controller-manager [dbb9f045fa3d] ...
	I0829 12:01:10.915990    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbb9f045fa3d"
	I0829 12:01:10.936054    4103 logs.go:123] Gathering logs for storage-provisioner [5a1fa9b460aa] ...
	I0829 12:01:10.936066    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a1fa9b460aa"
	I0829 12:01:10.947546    4103 logs.go:123] Gathering logs for container status ...
	I0829 12:01:10.947557    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 12:01:10.960845    4103 logs.go:123] Gathering logs for kubelet ...
	I0829 12:01:10.960858    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0829 12:01:10.996603    4103 logs.go:138] Found kubelet problem: Aug 29 18:58:52 stopped-upgrade-585000 kubelet[10170]: W0829 18:58:52.247392   10170 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-585000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-585000' and this object
	W0829 12:01:10.996699    4103 logs.go:138] Found kubelet problem: Aug 29 18:58:52 stopped-upgrade-585000 kubelet[10170]: E0829 18:58:52.247517   10170 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-585000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-585000' and this object
	I0829 12:01:10.998535    4103 logs.go:123] Gathering logs for coredns [28e2071c37ba] ...
	I0829 12:01:10.998540    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28e2071c37ba"
	I0829 12:01:11.010201    4103 logs.go:123] Gathering logs for etcd [a2c34b18c76e] ...
	I0829 12:01:11.010214    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2c34b18c76e"
	I0829 12:01:11.024541    4103 logs.go:123] Gathering logs for kube-apiserver [4f3fc224617b] ...
	I0829 12:01:11.024550    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f3fc224617b"
	I0829 12:01:11.038292    4103 logs.go:123] Gathering logs for coredns [0359013c9c50] ...
	I0829 12:01:11.038303    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0359013c9c50"
	I0829 12:01:11.050888    4103 logs.go:123] Gathering logs for coredns [7e0d35fd301c] ...
	I0829 12:01:11.050899    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e0d35fd301c"
	I0829 12:01:11.063870    4103 logs.go:123] Gathering logs for coredns [b0e67f216cc7] ...
	I0829 12:01:11.063881    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0e67f216cc7"
	I0829 12:01:11.076429    4103 logs.go:123] Gathering logs for Docker ...
	I0829 12:01:11.076440    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0829 12:01:11.101860    4103 logs.go:123] Gathering logs for describe nodes ...
	I0829 12:01:11.101869    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0829 12:01:11.136639    4103 out.go:358] Setting ErrFile to fd 2...
	I0829 12:01:11.136649    4103 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0829 12:01:11.136677    4103 out.go:270] X Problems detected in kubelet:
	W0829 12:01:11.136683    4103 out.go:270]   Aug 29 18:58:52 stopped-upgrade-585000 kubelet[10170]: W0829 18:58:52.247392   10170 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-585000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-585000' and this object
	W0829 12:01:11.136685    4103 out.go:270]   Aug 29 18:58:52 stopped-upgrade-585000 kubelet[10170]: E0829 18:58:52.247517   10170 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-585000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-585000' and this object
	I0829 12:01:11.136690    4103 out.go:358] Setting ErrFile to fd 2...
	I0829 12:01:11.136693    4103 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 12:01:21.140825    4103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0829 12:01:26.143521    4103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0829 12:01:26.143834    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0829 12:01:26.162731    4103 logs.go:276] 1 containers: [4f3fc224617b]
	I0829 12:01:26.162821    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0829 12:01:26.176538    4103 logs.go:276] 1 containers: [a2c34b18c76e]
	I0829 12:01:26.176606    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0829 12:01:26.191247    4103 logs.go:276] 4 containers: [28e2071c37ba 0359013c9c50 7e0d35fd301c b0e67f216cc7]
	I0829 12:01:26.191321    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0829 12:01:26.203043    4103 logs.go:276] 1 containers: [54debee86044]
	I0829 12:01:26.203110    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0829 12:01:26.213394    4103 logs.go:276] 1 containers: [300a11d66e22]
	I0829 12:01:26.213467    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0829 12:01:26.224252    4103 logs.go:276] 1 containers: [dbb9f045fa3d]
	I0829 12:01:26.224319    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0829 12:01:26.234439    4103 logs.go:276] 0 containers: []
	W0829 12:01:26.234449    4103 logs.go:278] No container was found matching "kindnet"
	I0829 12:01:26.234502    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0829 12:01:26.244837    4103 logs.go:276] 1 containers: [5a1fa9b460aa]
	I0829 12:01:26.244854    4103 logs.go:123] Gathering logs for kube-proxy [300a11d66e22] ...
	I0829 12:01:26.244859    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 300a11d66e22"
	I0829 12:01:26.257382    4103 logs.go:123] Gathering logs for Docker ...
	I0829 12:01:26.257393    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0829 12:01:26.281279    4103 logs.go:123] Gathering logs for etcd [a2c34b18c76e] ...
	I0829 12:01:26.281287    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2c34b18c76e"
	I0829 12:01:26.294969    4103 logs.go:123] Gathering logs for coredns [7e0d35fd301c] ...
	I0829 12:01:26.294979    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e0d35fd301c"
	I0829 12:01:26.306784    4103 logs.go:123] Gathering logs for kube-scheduler [54debee86044] ...
	I0829 12:01:26.306798    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54debee86044"
	I0829 12:01:26.323900    4103 logs.go:123] Gathering logs for describe nodes ...
	I0829 12:01:26.323911    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0829 12:01:26.361060    4103 logs.go:123] Gathering logs for coredns [b0e67f216cc7] ...
	I0829 12:01:26.361074    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0e67f216cc7"
	I0829 12:01:26.372816    4103 logs.go:123] Gathering logs for container status ...
	I0829 12:01:26.372829    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 12:01:26.386201    4103 logs.go:123] Gathering logs for storage-provisioner [5a1fa9b460aa] ...
	I0829 12:01:26.386216    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a1fa9b460aa"
	I0829 12:01:26.398140    4103 logs.go:123] Gathering logs for coredns [28e2071c37ba] ...
	I0829 12:01:26.398154    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28e2071c37ba"
	I0829 12:01:26.410190    4103 logs.go:123] Gathering logs for coredns [0359013c9c50] ...
	I0829 12:01:26.410203    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0359013c9c50"
	I0829 12:01:26.422023    4103 logs.go:123] Gathering logs for kube-controller-manager [dbb9f045fa3d] ...
	I0829 12:01:26.422034    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbb9f045fa3d"
	I0829 12:01:26.439251    4103 logs.go:123] Gathering logs for kubelet ...
	I0829 12:01:26.439261    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0829 12:01:26.473871    4103 logs.go:138] Found kubelet problem: Aug 29 18:58:52 stopped-upgrade-585000 kubelet[10170]: W0829 18:58:52.247392   10170 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-585000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-585000' and this object
	W0829 12:01:26.473963    4103 logs.go:138] Found kubelet problem: Aug 29 18:58:52 stopped-upgrade-585000 kubelet[10170]: E0829 18:58:52.247517   10170 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-585000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-585000' and this object
	I0829 12:01:26.475732    4103 logs.go:123] Gathering logs for dmesg ...
	I0829 12:01:26.475737    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 12:01:26.479776    4103 logs.go:123] Gathering logs for kube-apiserver [4f3fc224617b] ...
	I0829 12:01:26.479784    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f3fc224617b"
	I0829 12:01:26.494605    4103 out.go:358] Setting ErrFile to fd 2...
	I0829 12:01:26.494616    4103 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0829 12:01:26.494642    4103 out.go:270] X Problems detected in kubelet:
	W0829 12:01:26.494647    4103 out.go:270]   Aug 29 18:58:52 stopped-upgrade-585000 kubelet[10170]: W0829 18:58:52.247392   10170 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-585000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-585000' and this object
	W0829 12:01:26.494650    4103 out.go:270]   Aug 29 18:58:52 stopped-upgrade-585000 kubelet[10170]: E0829 18:58:52.247517   10170 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-585000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-585000' and this object
	I0829 12:01:26.494653    4103 out.go:358] Setting ErrFile to fd 2...
	I0829 12:01:26.494656    4103 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 12:01:36.498691    4103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0829 12:01:41.500880    4103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0829 12:01:41.501041    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0829 12:01:41.514256    4103 logs.go:276] 1 containers: [4f3fc224617b]
	I0829 12:01:41.514335    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0829 12:01:41.529775    4103 logs.go:276] 1 containers: [a2c34b18c76e]
	I0829 12:01:41.529849    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0829 12:01:41.540445    4103 logs.go:276] 4 containers: [28e2071c37ba 0359013c9c50 7e0d35fd301c b0e67f216cc7]
	I0829 12:01:41.540519    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0829 12:01:41.551471    4103 logs.go:276] 1 containers: [54debee86044]
	I0829 12:01:41.551532    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0829 12:01:41.562501    4103 logs.go:276] 1 containers: [300a11d66e22]
	I0829 12:01:41.562565    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0829 12:01:41.573371    4103 logs.go:276] 1 containers: [dbb9f045fa3d]
	I0829 12:01:41.573432    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0829 12:01:41.583285    4103 logs.go:276] 0 containers: []
	W0829 12:01:41.583298    4103 logs.go:278] No container was found matching "kindnet"
	I0829 12:01:41.583356    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0829 12:01:41.593508    4103 logs.go:276] 1 containers: [5a1fa9b460aa]
	I0829 12:01:41.593528    4103 logs.go:123] Gathering logs for coredns [7e0d35fd301c] ...
	I0829 12:01:41.593533    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e0d35fd301c"
	I0829 12:01:41.606016    4103 logs.go:123] Gathering logs for storage-provisioner [5a1fa9b460aa] ...
	I0829 12:01:41.606026    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a1fa9b460aa"
	I0829 12:01:41.617399    4103 logs.go:123] Gathering logs for etcd [a2c34b18c76e] ...
	I0829 12:01:41.617414    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2c34b18c76e"
	I0829 12:01:41.630778    4103 logs.go:123] Gathering logs for coredns [28e2071c37ba] ...
	I0829 12:01:41.630788    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28e2071c37ba"
	I0829 12:01:41.641906    4103 logs.go:123] Gathering logs for coredns [0359013c9c50] ...
	I0829 12:01:41.641916    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0359013c9c50"
	I0829 12:01:41.661946    4103 logs.go:123] Gathering logs for kube-controller-manager [dbb9f045fa3d] ...
	I0829 12:01:41.661956    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbb9f045fa3d"
	I0829 12:01:41.679415    4103 logs.go:123] Gathering logs for kube-scheduler [54debee86044] ...
	I0829 12:01:41.679427    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54debee86044"
	I0829 12:01:41.694227    4103 logs.go:123] Gathering logs for kube-proxy [300a11d66e22] ...
	I0829 12:01:41.694239    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 300a11d66e22"
	I0829 12:01:41.706510    4103 logs.go:123] Gathering logs for Docker ...
	I0829 12:01:41.706521    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0829 12:01:41.733086    4103 logs.go:123] Gathering logs for container status ...
	I0829 12:01:41.733103    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 12:01:41.745019    4103 logs.go:123] Gathering logs for kubelet ...
	I0829 12:01:41.745031    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0829 12:01:41.781864    4103 logs.go:138] Found kubelet problem: Aug 29 18:58:52 stopped-upgrade-585000 kubelet[10170]: W0829 18:58:52.247392   10170 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-585000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-585000' and this object
	W0829 12:01:41.781962    4103 logs.go:138] Found kubelet problem: Aug 29 18:58:52 stopped-upgrade-585000 kubelet[10170]: E0829 18:58:52.247517   10170 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-585000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-585000' and this object
	I0829 12:01:41.783716    4103 logs.go:123] Gathering logs for dmesg ...
	I0829 12:01:41.783722    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 12:01:41.788341    4103 logs.go:123] Gathering logs for describe nodes ...
	I0829 12:01:41.788349    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0829 12:01:41.823890    4103 logs.go:123] Gathering logs for kube-apiserver [4f3fc224617b] ...
	I0829 12:01:41.823904    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f3fc224617b"
	I0829 12:01:41.837952    4103 logs.go:123] Gathering logs for coredns [b0e67f216cc7] ...
	I0829 12:01:41.837965    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0e67f216cc7"
	I0829 12:01:41.851260    4103 out.go:358] Setting ErrFile to fd 2...
	I0829 12:01:41.851272    4103 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0829 12:01:41.851299    4103 out.go:270] X Problems detected in kubelet:
	W0829 12:01:41.851339    4103 out.go:270]   Aug 29 18:58:52 stopped-upgrade-585000 kubelet[10170]: W0829 18:58:52.247392   10170 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-585000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-585000' and this object
	W0829 12:01:41.851354    4103 out.go:270]   Aug 29 18:58:52 stopped-upgrade-585000 kubelet[10170]: E0829 18:58:52.247517   10170 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-585000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-585000' and this object
	I0829 12:01:41.851364    4103 out.go:358] Setting ErrFile to fd 2...
	I0829 12:01:41.851386    4103 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 12:01:51.855487    4103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0829 12:01:56.857668    4103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0829 12:01:56.857831    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0829 12:01:56.870185    4103 logs.go:276] 1 containers: [4f3fc224617b]
	I0829 12:01:56.870264    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0829 12:01:56.881801    4103 logs.go:276] 1 containers: [a2c34b18c76e]
	I0829 12:01:56.881861    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0829 12:01:56.892292    4103 logs.go:276] 4 containers: [28e2071c37ba 0359013c9c50 7e0d35fd301c b0e67f216cc7]
	I0829 12:01:56.892367    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0829 12:01:56.903386    4103 logs.go:276] 1 containers: [54debee86044]
	I0829 12:01:56.903448    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0829 12:01:56.914431    4103 logs.go:276] 1 containers: [300a11d66e22]
	I0829 12:01:56.914490    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0829 12:01:56.925111    4103 logs.go:276] 1 containers: [dbb9f045fa3d]
	I0829 12:01:56.925180    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0829 12:01:56.934744    4103 logs.go:276] 0 containers: []
	W0829 12:01:56.934757    4103 logs.go:278] No container was found matching "kindnet"
	I0829 12:01:56.934814    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0829 12:01:56.945163    4103 logs.go:276] 1 containers: [5a1fa9b460aa]
	I0829 12:01:56.945181    4103 logs.go:123] Gathering logs for coredns [28e2071c37ba] ...
	I0829 12:01:56.945187    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28e2071c37ba"
	I0829 12:01:56.959250    4103 logs.go:123] Gathering logs for coredns [0359013c9c50] ...
	I0829 12:01:56.959265    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0359013c9c50"
	I0829 12:01:56.971118    4103 logs.go:123] Gathering logs for storage-provisioner [5a1fa9b460aa] ...
	I0829 12:01:56.971131    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a1fa9b460aa"
	I0829 12:01:56.982815    4103 logs.go:123] Gathering logs for kubelet ...
	I0829 12:01:56.982830    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0829 12:01:57.018521    4103 logs.go:138] Found kubelet problem: Aug 29 18:58:52 stopped-upgrade-585000 kubelet[10170]: W0829 18:58:52.247392   10170 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-585000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-585000' and this object
	W0829 12:01:57.018614    4103 logs.go:138] Found kubelet problem: Aug 29 18:58:52 stopped-upgrade-585000 kubelet[10170]: E0829 18:58:52.247517   10170 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-585000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-585000' and this object
	I0829 12:01:57.020372    4103 logs.go:123] Gathering logs for dmesg ...
	I0829 12:01:57.020376    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 12:01:57.024505    4103 logs.go:123] Gathering logs for describe nodes ...
	I0829 12:01:57.024513    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0829 12:01:57.059441    4103 logs.go:123] Gathering logs for kube-apiserver [4f3fc224617b] ...
	I0829 12:01:57.059453    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f3fc224617b"
	I0829 12:01:57.075391    4103 logs.go:123] Gathering logs for kube-proxy [300a11d66e22] ...
	I0829 12:01:57.075404    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 300a11d66e22"
	I0829 12:01:57.090269    4103 logs.go:123] Gathering logs for Docker ...
	I0829 12:01:57.090280    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0829 12:01:57.114953    4103 logs.go:123] Gathering logs for etcd [a2c34b18c76e] ...
	I0829 12:01:57.114964    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2c34b18c76e"
	I0829 12:01:57.132441    4103 logs.go:123] Gathering logs for coredns [7e0d35fd301c] ...
	I0829 12:01:57.132452    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e0d35fd301c"
	I0829 12:01:57.144159    4103 logs.go:123] Gathering logs for coredns [b0e67f216cc7] ...
	I0829 12:01:57.144169    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0e67f216cc7"
	I0829 12:01:57.159692    4103 logs.go:123] Gathering logs for kube-scheduler [54debee86044] ...
	I0829 12:01:57.159707    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54debee86044"
	I0829 12:01:57.174475    4103 logs.go:123] Gathering logs for kube-controller-manager [dbb9f045fa3d] ...
	I0829 12:01:57.174485    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbb9f045fa3d"
	I0829 12:01:57.194047    4103 logs.go:123] Gathering logs for container status ...
	I0829 12:01:57.194060    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 12:01:57.206393    4103 out.go:358] Setting ErrFile to fd 2...
	I0829 12:01:57.206406    4103 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0829 12:01:57.206432    4103 out.go:270] X Problems detected in kubelet:
	W0829 12:01:57.206437    4103 out.go:270]   Aug 29 18:58:52 stopped-upgrade-585000 kubelet[10170]: W0829 18:58:52.247392   10170 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-585000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-585000' and this object
	W0829 12:01:57.206441    4103 out.go:270]   Aug 29 18:58:52 stopped-upgrade-585000 kubelet[10170]: E0829 18:58:52.247517   10170 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-585000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-585000' and this object
	I0829 12:01:57.206446    4103 out.go:358] Setting ErrFile to fd 2...
	I0829 12:01:57.206449    4103 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 12:02:07.209851    4103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0829 12:02:12.212050    4103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0829 12:02:12.212212    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0829 12:02:12.232741    4103 logs.go:276] 1 containers: [4f3fc224617b]
	I0829 12:02:12.232842    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0829 12:02:12.248256    4103 logs.go:276] 1 containers: [a2c34b18c76e]
	I0829 12:02:12.248333    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0829 12:02:12.260808    4103 logs.go:276] 4 containers: [28e2071c37ba 0359013c9c50 7e0d35fd301c b0e67f216cc7]
	I0829 12:02:12.260885    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0829 12:02:12.271593    4103 logs.go:276] 1 containers: [54debee86044]
	I0829 12:02:12.271660    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0829 12:02:12.281505    4103 logs.go:276] 1 containers: [300a11d66e22]
	I0829 12:02:12.281576    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0829 12:02:12.299252    4103 logs.go:276] 1 containers: [dbb9f045fa3d]
	I0829 12:02:12.299319    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0829 12:02:12.312278    4103 logs.go:276] 0 containers: []
	W0829 12:02:12.312289    4103 logs.go:278] No container was found matching "kindnet"
	I0829 12:02:12.312341    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0829 12:02:12.322555    4103 logs.go:276] 1 containers: [5a1fa9b460aa]
	I0829 12:02:12.322574    4103 logs.go:123] Gathering logs for kubelet ...
	I0829 12:02:12.322581    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0829 12:02:12.360479    4103 logs.go:138] Found kubelet problem: Aug 29 18:58:52 stopped-upgrade-585000 kubelet[10170]: W0829 18:58:52.247392   10170 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-585000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-585000' and this object
	W0829 12:02:12.360581    4103 logs.go:138] Found kubelet problem: Aug 29 18:58:52 stopped-upgrade-585000 kubelet[10170]: E0829 18:58:52.247517   10170 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-585000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-585000' and this object
	I0829 12:02:12.362455    4103 logs.go:123] Gathering logs for etcd [a2c34b18c76e] ...
	I0829 12:02:12.362465    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2c34b18c76e"
	I0829 12:02:12.380877    4103 logs.go:123] Gathering logs for coredns [7e0d35fd301c] ...
	I0829 12:02:12.380890    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e0d35fd301c"
	I0829 12:02:12.399459    4103 logs.go:123] Gathering logs for kube-scheduler [54debee86044] ...
	I0829 12:02:12.399472    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54debee86044"
	I0829 12:02:12.414413    4103 logs.go:123] Gathering logs for storage-provisioner [5a1fa9b460aa] ...
	I0829 12:02:12.414423    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a1fa9b460aa"
	I0829 12:02:12.425837    4103 logs.go:123] Gathering logs for coredns [28e2071c37ba] ...
	I0829 12:02:12.425848    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28e2071c37ba"
	I0829 12:02:12.437697    4103 logs.go:123] Gathering logs for kube-proxy [300a11d66e22] ...
	I0829 12:02:12.437707    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 300a11d66e22"
	I0829 12:02:12.449665    4103 logs.go:123] Gathering logs for kube-controller-manager [dbb9f045fa3d] ...
	I0829 12:02:12.449677    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbb9f045fa3d"
	I0829 12:02:12.467670    4103 logs.go:123] Gathering logs for container status ...
	I0829 12:02:12.467682    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 12:02:12.479208    4103 logs.go:123] Gathering logs for coredns [b0e67f216cc7] ...
	I0829 12:02:12.479222    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0e67f216cc7"
	I0829 12:02:12.490793    4103 logs.go:123] Gathering logs for Docker ...
	I0829 12:02:12.490804    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0829 12:02:12.515454    4103 logs.go:123] Gathering logs for dmesg ...
	I0829 12:02:12.515464    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 12:02:12.519410    4103 logs.go:123] Gathering logs for describe nodes ...
	I0829 12:02:12.519416    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0829 12:02:12.553482    4103 logs.go:123] Gathering logs for kube-apiserver [4f3fc224617b] ...
	I0829 12:02:12.553495    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f3fc224617b"
	I0829 12:02:12.567942    4103 logs.go:123] Gathering logs for coredns [0359013c9c50] ...
	I0829 12:02:12.567954    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0359013c9c50"
	I0829 12:02:12.579517    4103 out.go:358] Setting ErrFile to fd 2...
	I0829 12:02:12.579530    4103 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0829 12:02:12.579558    4103 out.go:270] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0829 12:02:12.579563    4103 out.go:270]   Aug 29 18:58:52 stopped-upgrade-585000 kubelet[10170]: W0829 18:58:52.247392   10170 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-585000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-585000' and this object
	  Aug 29 18:58:52 stopped-upgrade-585000 kubelet[10170]: W0829 18:58:52.247392   10170 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-585000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-585000' and this object
	W0829 12:02:12.579567    4103 out.go:270]   Aug 29 18:58:52 stopped-upgrade-585000 kubelet[10170]: E0829 18:58:52.247517   10170 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-585000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-585000' and this object
	  Aug 29 18:58:52 stopped-upgrade-585000 kubelet[10170]: E0829 18:58:52.247517   10170 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-585000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-585000' and this object
	I0829 12:02:12.579572    4103 out.go:358] Setting ErrFile to fd 2...
	I0829 12:02:12.579574    4103 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 12:02:22.583594    4103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0829 12:02:27.585866    4103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0829 12:02:27.586059    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0829 12:02:27.601040    4103 logs.go:276] 1 containers: [4f3fc224617b]
	I0829 12:02:27.601121    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0829 12:02:27.613411    4103 logs.go:276] 1 containers: [a2c34b18c76e]
	I0829 12:02:27.613478    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0829 12:02:27.624057    4103 logs.go:276] 4 containers: [28e2071c37ba 0359013c9c50 7e0d35fd301c b0e67f216cc7]
	I0829 12:02:27.624126    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0829 12:02:27.654893    4103 logs.go:276] 1 containers: [54debee86044]
	I0829 12:02:27.654963    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0829 12:02:27.676396    4103 logs.go:276] 1 containers: [300a11d66e22]
	I0829 12:02:27.676466    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0829 12:02:27.690989    4103 logs.go:276] 1 containers: [dbb9f045fa3d]
	I0829 12:02:27.691052    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0829 12:02:27.700906    4103 logs.go:276] 0 containers: []
	W0829 12:02:27.700920    4103 logs.go:278] No container was found matching "kindnet"
	I0829 12:02:27.700979    4103 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0829 12:02:27.711865    4103 logs.go:276] 1 containers: [5a1fa9b460aa]
	I0829 12:02:27.711880    4103 logs.go:123] Gathering logs for kubelet ...
	I0829 12:02:27.711886    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0829 12:02:27.747681    4103 logs.go:138] Found kubelet problem: Aug 29 18:58:52 stopped-upgrade-585000 kubelet[10170]: W0829 18:58:52.247392   10170 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-585000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-585000' and this object
	W0829 12:02:27.747776    4103 logs.go:138] Found kubelet problem: Aug 29 18:58:52 stopped-upgrade-585000 kubelet[10170]: E0829 18:58:52.247517   10170 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-585000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-585000' and this object
	I0829 12:02:27.749535    4103 logs.go:123] Gathering logs for dmesg ...
	I0829 12:02:27.749542    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0829 12:02:27.753769    4103 logs.go:123] Gathering logs for kube-controller-manager [dbb9f045fa3d] ...
	I0829 12:02:27.753779    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbb9f045fa3d"
	I0829 12:02:27.771954    4103 logs.go:123] Gathering logs for storage-provisioner [5a1fa9b460aa] ...
	I0829 12:02:27.771967    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5a1fa9b460aa"
	I0829 12:02:27.784085    4103 logs.go:123] Gathering logs for Docker ...
	I0829 12:02:27.784096    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0829 12:02:27.807246    4103 logs.go:123] Gathering logs for kube-apiserver [4f3fc224617b] ...
	I0829 12:02:27.807257    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f3fc224617b"
	I0829 12:02:27.821611    4103 logs.go:123] Gathering logs for coredns [0359013c9c50] ...
	I0829 12:02:27.821620    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0359013c9c50"
	I0829 12:02:27.833098    4103 logs.go:123] Gathering logs for coredns [7e0d35fd301c] ...
	I0829 12:02:27.833111    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7e0d35fd301c"
	I0829 12:02:27.844726    4103 logs.go:123] Gathering logs for coredns [b0e67f216cc7] ...
	I0829 12:02:27.844736    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0e67f216cc7"
	I0829 12:02:27.856766    4103 logs.go:123] Gathering logs for container status ...
	I0829 12:02:27.856777    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0829 12:02:27.868397    4103 logs.go:123] Gathering logs for describe nodes ...
	I0829 12:02:27.868411    4103 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0829 12:02:27.939872    4103 logs.go:123] Gathering logs for etcd [a2c34b18c76e] ...
	I0829 12:02:27.939886    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a2c34b18c76e"
	I0829 12:02:27.954107    4103 logs.go:123] Gathering logs for coredns [28e2071c37ba] ...
	I0829 12:02:27.954121    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 28e2071c37ba"
	I0829 12:02:27.966188    4103 logs.go:123] Gathering logs for kube-scheduler [54debee86044] ...
	I0829 12:02:27.966199    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 54debee86044"
	I0829 12:02:27.982050    4103 logs.go:123] Gathering logs for kube-proxy [300a11d66e22] ...
	I0829 12:02:27.982064    4103 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 300a11d66e22"
	I0829 12:02:27.993950    4103 out.go:358] Setting ErrFile to fd 2...
	I0829 12:02:27.993960    4103 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0829 12:02:27.993986    4103 out.go:270] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0829 12:02:27.994002    4103 out.go:270]   Aug 29 18:58:52 stopped-upgrade-585000 kubelet[10170]: W0829 18:58:52.247392   10170 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-585000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-585000' and this object
	  Aug 29 18:58:52 stopped-upgrade-585000 kubelet[10170]: W0829 18:58:52.247392   10170 reflector.go:324] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-585000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-585000' and this object
	W0829 12:02:27.994007    4103 out.go:270]   Aug 29 18:58:52 stopped-upgrade-585000 kubelet[10170]: E0829 18:58:52.247517   10170 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-585000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-585000' and this object
	  Aug 29 18:58:52 stopped-upgrade-585000 kubelet[10170]: E0829 18:58:52.247517   10170 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:stopped-upgrade-585000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-585000' and this object
	I0829 12:02:27.994014    4103 out.go:358] Setting ErrFile to fd 2...
	I0829 12:02:27.994017    4103 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 12:02:37.997078    4103 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0829 12:02:42.999409    4103 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0829 12:02:43.004223    4103 out.go:201] 
	W0829 12:02:43.008091    4103 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0829 12:02:43.008112    4103 out.go:270] * 
	* 
	W0829 12:02:43.009643    4103 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0829 12:02:43.020099    4103 out.go:201] 

                                                
                                                
** /stderr **
version_upgrade_test.go:200: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p stopped-upgrade-585000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (610.03s)

                                                
                                    
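The wait loop above polls the apiserver's /healthz endpoint (api_server.go:253) roughly every ten seconds, re-gathering logs after each failed probe, until the 6m0s budget expires and the run exits with GUEST_START. A minimal sketch of that kind of health probe, assuming the same https://10.0.2.15:8443/healthz endpoint and a self-signed in-VM certificate (hence InsecureSkipVerify); this is an illustration, not minikube's actual code:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// pollHealthz probes url until it answers 200 OK or the budget expires.
// Shape only; minikube's real loop also re-gathers logs between probes.
func pollHealthz(url string, budget time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second, // per-probe timeout, matching the ~5s gaps above
		Transport: &http.Transport{
			// Assumption: the in-VM apiserver cert is not trusted by the host.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(budget)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthy
			}
		}
		time.Sleep(10 * time.Second) // roughly the probe cadence in the timestamps above
	}
	return fmt.Errorf("apiserver healthz never reported healthy")
}

func main() {
	if err := pollHealthz("https://10.0.2.15:8443/healthz", 6*time.Minute); err != nil {
		fmt.Println("X", err)
	}
}
```
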
TestPause/serial/Start (10.12s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-799000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-799000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (10.05850325s)

                                                
                                                
-- stdout --
	* [pause-799000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19531
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19531-965/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19531-965/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "pause-799000" primary control-plane node in "pause-799000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-799000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-799000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-799000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-799000 -n pause-799000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-799000 -n pause-799000: exit status 7 (59.150458ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-799000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (10.12s)

                                                
                                    
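Every VM create in this block dies with `Failed to connect to "/var/run/socket_vmnet": Connection refused`, i.e. no socket_vmnet daemon is accepting on the unix socket that socket_vmnet_client is pointed at. A quick hedged probe for that condition (standalone Go, not part of the suite):

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Same socket path the qemu2 driver hands to socket_vmnet_client in these logs.
	const sock = "/var/run/socket_vmnet"
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// "connection refused" = the socket file exists but nothing is accepting;
		// "no such file or directory" = the daemon never created it.
		fmt.Println("socket_vmnet unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}
```

The remaining qemu2-driver failures in this report show the same refusal, so they share this single root cause on the agent.
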
TestNoKubernetes/serial/StartWithK8s (9.98s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-185000 --driver=qemu2 
E0829 12:03:32.534205    1418 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/addons-048000/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-185000 --driver=qemu2 : exit status 80 (9.926445s)

                                                
                                                
-- stdout --
	* [NoKubernetes-185000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19531
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19531-965/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19531-965/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "NoKubernetes-185000" primary control-plane node in "NoKubernetes-185000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-185000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-185000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-185000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-185000 -n NoKubernetes-185000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-185000 -n NoKubernetes-185000: exit status 7 (51.094958ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-185000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (9.98s)

                                                
                                    
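The `(dbg) Non-zero exit` lines record the child process's exit code; 80 here is the code carried by the GUEST_PROVISION abort shown in stderr. A sketch of how a harness can capture that with os/exec (an illustration, not helpers_test.go itself):

```go
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// runAndReport runs a command and reports its exit code the way the
// "(dbg) Non-zero exit" lines do. Illustration only.
func runAndReport(name string, args ...string) {
	out, err := exec.Command(name, args...).CombinedOutput()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		// Non-zero exit: ExitCode() preserves the child's code (80 above).
		fmt.Printf("Non-zero exit: exit status %d\n%s", ee.ExitCode(), out)
		return
	}
	if err != nil {
		fmt.Println("failed to start:", err) // binary missing, not executable, etc.
		return
	}
	fmt.Printf("ok\n%s", out)
}

func main() {
	runAndReport("out/minikube-darwin-arm64", "start", "-p", "NoKubernetes-185000", "--driver=qemu2")
}
```
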
TestNoKubernetes/serial/StartWithStopK8s (5.33s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-185000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-185000 --no-kubernetes --driver=qemu2 : exit status 80 (5.264790042s)

                                                
                                                
-- stdout --
	* [NoKubernetes-185000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19531
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19531-965/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19531-965/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-185000
	* Restarting existing qemu2 VM for "NoKubernetes-185000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-185000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-185000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-185000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-185000 -n NoKubernetes-185000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-185000 -n NoKubernetes-185000: exit status 7 (69.15075ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-185000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (5.33s)

                                                
                                    
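As the paired `! StartHost failed, but will try again` / `* Failed to start qemu2 VM` lines show, the start path attempts the host start twice before aborting, with a 5-second pause between attempts (printed later in this report as `Will try again in 5 seconds` by start.go:729). A toy version of that retry shape, with startHost as a stand-in for the real driver call:

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// startHost is a stand-in for the real driver start; it always fails here,
// the way the socket_vmnet dials fail throughout this run.
func startHost() error {
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func main() {
	const attempts = 2 // the logs show exactly one retry
	var err error
	for i := 0; i < attempts; i++ {
		if err = startHost(); err == nil {
			fmt.Println("host started")
			return
		}
		if i < attempts-1 {
			fmt.Println("! StartHost failed, but will try again:", err)
			time.Sleep(5 * time.Second) // delay printed by start.go:729
		}
	}
	fmt.Println("X giving up:", err)
}
```
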
TestNoKubernetes/serial/Start (6.07s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-185000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-185000 --no-kubernetes --driver=qemu2 : exit status 80 (6.011160625s)

                                                
                                                
-- stdout --
	* [NoKubernetes-185000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19531
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19531-965/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19531-965/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-185000
	* Restarting existing qemu2 VM for "NoKubernetes-185000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-185000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-185000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-185000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-185000 -n NoKubernetes-185000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-185000 -n NoKubernetes-185000: exit status 7 (61.045125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-185000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (6.07s)

                                                
                                    
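Unlike the fresh create in StartWithK8s, this run prints `* Using the qemu2 driver based on existing profile` and restarts the VM instead of recreating it; the verbose log for auto-015000 below shows profiles being persisted as config.json under $MINIKUBE_HOME/profiles/<name>/. A toy check of that exists-or-create decision (the exact layout is inferred from those log lines, not confirmed against minikube's source):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// MINIKUBE_HOME as printed in the stdout above.
	home := "/Users/jenkins/minikube-integration/19531-965/.minikube"
	cfg := filepath.Join(home, "profiles", "NoKubernetes-185000", "config.json")
	if _, err := os.Stat(cfg); err == nil {
		fmt.Println("existing profile: restart the VM")
	} else if os.IsNotExist(err) {
		fmt.Println("no profile yet: create a new VM")
	} else {
		fmt.Println("cannot inspect profile:", err)
	}
}
```
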
TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.87s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.87s)

                                                
                                    
TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.48s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.33.1 on darwin (arm64)
- MINIKUBE_LOCATION=19531
- KUBECONFIG=/Users/jenkins/minikube-integration/19531-965/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2723932827/001
* Using the hyperkit driver based on user configuration

                                                
                                                
X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

                                                
                                                
driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.48s)

                                                
                                    
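Both hyperkit skip-upgrade tests abort with DRV_UNSUPPORTED_OS (exit status 56): hyperkit is an Intel-only hypervisor and this agent reports darwin/arm64. The platform gate amounts to a GOOS/GOARCH check; a sketch of the idea, not minikube's registry code:

```go
package main

import (
	"fmt"
	"runtime"
)

func hyperkitSupported() bool {
	// hyperkit is x86_64-only; Apple-silicon hosts report darwin/arm64.
	return runtime.GOOS == "darwin" && runtime.GOARCH == "amd64"
}

func main() {
	if !hyperkitSupported() {
		// This unsupported-driver path is what exits with status 56 in the log above.
		fmt.Printf("X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on %s/%s\n",
			runtime.GOOS, runtime.GOARCH)
		return
	}
	fmt.Println("hyperkit is usable on this host")
}
```
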
TestNoKubernetes/serial/StartNoArgs (5.31s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-185000 --driver=qemu2 
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-185000 --driver=qemu2 : exit status 80 (5.272534083s)

                                                
                                                
-- stdout --
	* [NoKubernetes-185000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19531
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19531-965/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19531-965/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-185000
	* Restarting existing qemu2 VM for "NoKubernetes-185000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-185000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-185000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-185000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-185000 -n NoKubernetes-185000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-185000 -n NoKubernetes-185000: exit status 7 (38.695ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-185000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (5.31s)

                                                
                                    
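The post-mortem probes pass `--format={{.Host}}` to `minikube status`: a Go text/template rendered against the status object, which is why the commands print only `Stopped`. A minimal illustration with a hypothetical Status type (field names are assumptions, not minikube's exact schema):

```go
package main

import (
	"os"
	"text/template"
)

// Status is a hypothetical stand-in for the struct minikube renders;
// only Host matters for --format={{.Host}}.
type Status struct {
	Host    string
	Kubelet string
}

func main() {
	tmpl := template.Must(template.New("status").Parse("{{.Host}}\n"))
	// "Stopped" is what the probes in this report print for the dead VMs.
	_ = tmpl.Execute(os.Stdout, Status{Host: "Stopped", Kubelet: "Stopped"})
}
```
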
TestNetworkPlugins/group/auto/Start (9.86s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-015000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-015000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (9.8566235s)

                                                
                                                
-- stdout --
	* [auto-015000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19531
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19531-965/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19531-965/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "auto-015000" primary control-plane node in "auto-015000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-015000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0829 12:04:28.782536    4921 out.go:345] Setting OutFile to fd 1 ...
	I0829 12:04:28.782660    4921 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 12:04:28.782664    4921 out.go:358] Setting ErrFile to fd 2...
	I0829 12:04:28.782666    4921 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 12:04:28.782770    4921 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19531-965/.minikube/bin
	I0829 12:04:28.783843    4921 out.go:352] Setting JSON to false
	I0829 12:04:28.800075    4921 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3832,"bootTime":1724954436,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0829 12:04:28.800146    4921 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0829 12:04:28.806678    4921 out.go:177] * [auto-015000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0829 12:04:28.815563    4921 out.go:177]   - MINIKUBE_LOCATION=19531
	I0829 12:04:28.815609    4921 notify.go:220] Checking for updates...
	I0829 12:04:28.822490    4921 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19531-965/kubeconfig
	I0829 12:04:28.825537    4921 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0829 12:04:28.827145    4921 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0829 12:04:28.830519    4921 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19531-965/.minikube
	I0829 12:04:28.833549    4921 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0829 12:04:28.836857    4921 config.go:182] Loaded profile config "cert-expiration-916000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0829 12:04:28.836929    4921 config.go:182] Loaded profile config "multinode-531000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0829 12:04:28.836986    4921 driver.go:392] Setting default libvirt URI to qemu:///system
	I0829 12:04:28.841488    4921 out.go:177] * Using the qemu2 driver based on user configuration
	I0829 12:04:28.848502    4921 start.go:297] selected driver: qemu2
	I0829 12:04:28.848508    4921 start.go:901] validating driver "qemu2" against <nil>
	I0829 12:04:28.848519    4921 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0829 12:04:28.850777    4921 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0829 12:04:28.854568    4921 out.go:177] * Automatically selected the socket_vmnet network
	I0829 12:04:28.857756    4921 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0829 12:04:28.857779    4921 cni.go:84] Creating CNI manager for ""
	I0829 12:04:28.857789    4921 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0829 12:04:28.857794    4921 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0829 12:04:28.857831    4921 start.go:340] cluster config:
	{Name:auto-015000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:auto-015000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 12:04:28.861718    4921 iso.go:125] acquiring lock: {Name:mke1867919ff797f263eb38fb348ce00bba9d753 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 12:04:28.869580    4921 out.go:177] * Starting "auto-015000" primary control-plane node in "auto-015000" cluster
	I0829 12:04:28.873525    4921 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0829 12:04:28.873543    4921 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19531-965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0829 12:04:28.873559    4921 cache.go:56] Caching tarball of preloaded images
	I0829 12:04:28.873635    4921 preload.go:172] Found /Users/jenkins/minikube-integration/19531-965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0829 12:04:28.873643    4921 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0829 12:04:28.873718    4921 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/auto-015000/config.json ...
	I0829 12:04:28.873730    4921 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/auto-015000/config.json: {Name:mk50edbcb0b59ce75a9a58a0c0b9c0757727d0e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 12:04:28.873972    4921 start.go:360] acquireMachinesLock for auto-015000: {Name:mk54c878341df5948b162b01843176e59d4f0973 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0829 12:04:28.874010    4921 start.go:364] duration metric: took 31.5µs to acquireMachinesLock for "auto-015000"
	I0829 12:04:28.874022    4921 start.go:93] Provisioning new machine with config: &{Name:auto-015000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:auto-015000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0829 12:04:28.874071    4921 start.go:125] createHost starting for "" (driver="qemu2")
	I0829 12:04:28.881523    4921 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0829 12:04:28.901314    4921 start.go:159] libmachine.API.Create for "auto-015000" (driver="qemu2")
	I0829 12:04:28.901340    4921 client.go:168] LocalClient.Create starting
	I0829 12:04:28.901402    4921 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19531-965/.minikube/certs/ca.pem
	I0829 12:04:28.901433    4921 main.go:141] libmachine: Decoding PEM data...
	I0829 12:04:28.901442    4921 main.go:141] libmachine: Parsing certificate...
	I0829 12:04:28.901482    4921 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19531-965/.minikube/certs/cert.pem
	I0829 12:04:28.901508    4921 main.go:141] libmachine: Decoding PEM data...
	I0829 12:04:28.901515    4921 main.go:141] libmachine: Parsing certificate...
	I0829 12:04:28.901891    4921 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19531-965/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19531-965/.minikube/cache/iso/arm64/minikube-v1.33.1-1724775098-19521-arm64.iso...
	I0829 12:04:29.064760    4921 main.go:141] libmachine: Creating SSH key...
	I0829 12:04:29.092167    4921 main.go:141] libmachine: Creating Disk image...
	I0829 12:04:29.092173    4921 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0829 12:04:29.092347    4921 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19531-965/.minikube/machines/auto-015000/disk.qcow2.raw /Users/jenkins/minikube-integration/19531-965/.minikube/machines/auto-015000/disk.qcow2
	I0829 12:04:29.101738    4921 main.go:141] libmachine: STDOUT: 
	I0829 12:04:29.101756    4921 main.go:141] libmachine: STDERR: 
	I0829 12:04:29.101822    4921 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19531-965/.minikube/machines/auto-015000/disk.qcow2 +20000M
	I0829 12:04:29.109833    4921 main.go:141] libmachine: STDOUT: Image resized.
	
	I0829 12:04:29.109846    4921 main.go:141] libmachine: STDERR: 
	I0829 12:04:29.109861    4921 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19531-965/.minikube/machines/auto-015000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19531-965/.minikube/machines/auto-015000/disk.qcow2
	I0829 12:04:29.109867    4921 main.go:141] libmachine: Starting QEMU VM...
	I0829 12:04:29.109878    4921 qemu.go:418] Using hvf for hardware acceleration
	I0829 12:04:29.109900    4921 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19531-965/.minikube/machines/auto-015000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19531-965/.minikube/machines/auto-015000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19531-965/.minikube/machines/auto-015000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:a9:f6:e0:df:2a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19531-965/.minikube/machines/auto-015000/disk.qcow2
	I0829 12:04:29.111521    4921 main.go:141] libmachine: STDOUT: 
	I0829 12:04:29.111537    4921 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0829 12:04:29.111554    4921 client.go:171] duration metric: took 210.211166ms to LocalClient.Create
	I0829 12:04:31.113786    4921 start.go:128] duration metric: took 2.239726292s to createHost
	I0829 12:04:31.113846    4921 start.go:83] releasing machines lock for "auto-015000", held for 2.239858708s
	W0829 12:04:31.113924    4921 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0829 12:04:31.134106    4921 out.go:177] * Deleting "auto-015000" in qemu2 ...
	W0829 12:04:31.166531    4921 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0829 12:04:31.166561    4921 start.go:729] Will try again in 5 seconds ...
	I0829 12:04:36.167033    4921 start.go:360] acquireMachinesLock for auto-015000: {Name:mk54c878341df5948b162b01843176e59d4f0973 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0829 12:04:36.167516    4921 start.go:364] duration metric: took 369.958µs to acquireMachinesLock for "auto-015000"
	I0829 12:04:36.167636    4921 start.go:93] Provisioning new machine with config: &{Name:auto-015000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:auto-015000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0829 12:04:36.168111    4921 start.go:125] createHost starting for "" (driver="qemu2")
	I0829 12:04:36.177656    4921 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0829 12:04:36.230305    4921 start.go:159] libmachine.API.Create for "auto-015000" (driver="qemu2")
	I0829 12:04:36.230348    4921 client.go:168] LocalClient.Create starting
	I0829 12:04:36.230489    4921 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19531-965/.minikube/certs/ca.pem
	I0829 12:04:36.230555    4921 main.go:141] libmachine: Decoding PEM data...
	I0829 12:04:36.230573    4921 main.go:141] libmachine: Parsing certificate...
	I0829 12:04:36.230630    4921 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19531-965/.minikube/certs/cert.pem
	I0829 12:04:36.230674    4921 main.go:141] libmachine: Decoding PEM data...
	I0829 12:04:36.230685    4921 main.go:141] libmachine: Parsing certificate...
	I0829 12:04:36.231174    4921 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19531-965/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19531-965/.minikube/cache/iso/arm64/minikube-v1.33.1-1724775098-19521-arm64.iso...
	I0829 12:04:36.411266    4921 main.go:141] libmachine: Creating SSH key...
	I0829 12:04:36.544049    4921 main.go:141] libmachine: Creating Disk image...
	I0829 12:04:36.544055    4921 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0829 12:04:36.544241    4921 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19531-965/.minikube/machines/auto-015000/disk.qcow2.raw /Users/jenkins/minikube-integration/19531-965/.minikube/machines/auto-015000/disk.qcow2
	I0829 12:04:36.553914    4921 main.go:141] libmachine: STDOUT: 
	I0829 12:04:36.553933    4921 main.go:141] libmachine: STDERR: 
	I0829 12:04:36.553987    4921 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19531-965/.minikube/machines/auto-015000/disk.qcow2 +20000M
	I0829 12:04:36.561945    4921 main.go:141] libmachine: STDOUT: Image resized.
	
	I0829 12:04:36.561962    4921 main.go:141] libmachine: STDERR: 
	I0829 12:04:36.561973    4921 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19531-965/.minikube/machines/auto-015000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19531-965/.minikube/machines/auto-015000/disk.qcow2
	I0829 12:04:36.561977    4921 main.go:141] libmachine: Starting QEMU VM...
	I0829 12:04:36.561984    4921 qemu.go:418] Using hvf for hardware acceleration
	I0829 12:04:36.562015    4921 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19531-965/.minikube/machines/auto-015000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19531-965/.minikube/machines/auto-015000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19531-965/.minikube/machines/auto-015000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fe:e6:ba:c9:78:78 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19531-965/.minikube/machines/auto-015000/disk.qcow2
	I0829 12:04:36.563676    4921 main.go:141] libmachine: STDOUT: 
	I0829 12:04:36.563692    4921 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0829 12:04:36.563704    4921 client.go:171] duration metric: took 333.356125ms to LocalClient.Create
	I0829 12:04:38.565851    4921 start.go:128] duration metric: took 2.397738834s to createHost
	I0829 12:04:38.565913    4921 start.go:83] releasing machines lock for "auto-015000", held for 2.398404875s
	W0829 12:04:38.566277    4921 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p auto-015000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p auto-015000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0829 12:04:38.574754    4921 out.go:201] 
	W0829 12:04:38.584905    4921 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0829 12:04:38.584941    4921 out.go:270] * 
	* 
	W0829 12:04:38.587556    4921 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0829 12:04:38.596788    4921 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (9.86s)
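
Every failure in this group has the same root cause: the qemu2 driver launches QEMU through /opt/socket_vmnet/bin/socket_vmnet_client, which must first connect to the unix socket /var/run/socket_vmnet, and that connection is refused because no socket_vmnet daemon is listening on the build agent. The probe can be reproduced outside minikube; the following is a minimal Go sketch (the socket path is taken from the log above, everything else is illustrative and not minikube code):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Dial the same unix socket that socket_vmnet_client dials first.
        conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
        if err != nil {
            // With no daemon listening this prints, e.g.:
            // dial unix /var/run/socket_vmnet: connect: connection refused
            fmt.Println("socket_vmnet not reachable:", err)
            return
        }
        conn.Close()
        fmt.Println("socket_vmnet is accepting connections")
    }

Until that dial succeeds, every test below that creates a qemu2 VM on the socket_vmnet network fails the same way.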

TestNetworkPlugins/group/calico/Start (10.04s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-015000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-015000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (10.0336145s)

-- stdout --
	* [calico-015000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19531
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19531-965/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19531-965/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "calico-015000" primary control-plane node in "calico-015000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-015000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0829 12:04:40.758784    5031 out.go:345] Setting OutFile to fd 1 ...
	I0829 12:04:40.758916    5031 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 12:04:40.758919    5031 out.go:358] Setting ErrFile to fd 2...
	I0829 12:04:40.758921    5031 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 12:04:40.759051    5031 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19531-965/.minikube/bin
	I0829 12:04:40.760119    5031 out.go:352] Setting JSON to false
	I0829 12:04:40.776006    5031 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3844,"bootTime":1724954436,"procs":466,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0829 12:04:40.776078    5031 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0829 12:04:40.782619    5031 out.go:177] * [calico-015000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0829 12:04:40.789428    5031 out.go:177]   - MINIKUBE_LOCATION=19531
	I0829 12:04:40.789480    5031 notify.go:220] Checking for updates...
	I0829 12:04:40.795437    5031 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19531-965/kubeconfig
	I0829 12:04:40.798360    5031 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0829 12:04:40.801435    5031 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0829 12:04:40.804485    5031 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19531-965/.minikube
	I0829 12:04:40.807450    5031 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0829 12:04:40.810747    5031 config.go:182] Loaded profile config "cert-expiration-916000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0829 12:04:40.810813    5031 config.go:182] Loaded profile config "multinode-531000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0829 12:04:40.810865    5031 driver.go:392] Setting default libvirt URI to qemu:///system
	I0829 12:04:40.815435    5031 out.go:177] * Using the qemu2 driver based on user configuration
	I0829 12:04:40.822497    5031 start.go:297] selected driver: qemu2
	I0829 12:04:40.822504    5031 start.go:901] validating driver "qemu2" against <nil>
	I0829 12:04:40.822513    5031 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0829 12:04:40.824700    5031 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0829 12:04:40.828451    5031 out.go:177] * Automatically selected the socket_vmnet network
	I0829 12:04:40.831558    5031 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0829 12:04:40.831575    5031 cni.go:84] Creating CNI manager for "calico"
	I0829 12:04:40.831578    5031 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I0829 12:04:40.831610    5031 start.go:340] cluster config:
	{Name:calico-015000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:calico-015000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 12:04:40.835099    5031 iso.go:125] acquiring lock: {Name:mke1867919ff797f263eb38fb348ce00bba9d753 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 12:04:40.842459    5031 out.go:177] * Starting "calico-015000" primary control-plane node in "calico-015000" cluster
	I0829 12:04:40.846449    5031 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0829 12:04:40.846465    5031 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19531-965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0829 12:04:40.846475    5031 cache.go:56] Caching tarball of preloaded images
	I0829 12:04:40.846535    5031 preload.go:172] Found /Users/jenkins/minikube-integration/19531-965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0829 12:04:40.846542    5031 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0829 12:04:40.846608    5031 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/calico-015000/config.json ...
	I0829 12:04:40.846621    5031 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/calico-015000/config.json: {Name:mkc2f5aee3cf8ba673001469faaa677cfc79112c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 12:04:40.846844    5031 start.go:360] acquireMachinesLock for calico-015000: {Name:mk54c878341df5948b162b01843176e59d4f0973 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0829 12:04:40.846878    5031 start.go:364] duration metric: took 27.417µs to acquireMachinesLock for "calico-015000"
	I0829 12:04:40.846888    5031 start.go:93] Provisioning new machine with config: &{Name:calico-015000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:calico-015000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0829 12:04:40.846919    5031 start.go:125] createHost starting for "" (driver="qemu2")
	I0829 12:04:40.855461    5031 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0829 12:04:40.873266    5031 start.go:159] libmachine.API.Create for "calico-015000" (driver="qemu2")
	I0829 12:04:40.873303    5031 client.go:168] LocalClient.Create starting
	I0829 12:04:40.873375    5031 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19531-965/.minikube/certs/ca.pem
	I0829 12:04:40.873407    5031 main.go:141] libmachine: Decoding PEM data...
	I0829 12:04:40.873417    5031 main.go:141] libmachine: Parsing certificate...
	I0829 12:04:40.873455    5031 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19531-965/.minikube/certs/cert.pem
	I0829 12:04:40.873482    5031 main.go:141] libmachine: Decoding PEM data...
	I0829 12:04:40.873492    5031 main.go:141] libmachine: Parsing certificate...
	I0829 12:04:40.873846    5031 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19531-965/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19531-965/.minikube/cache/iso/arm64/minikube-v1.33.1-1724775098-19521-arm64.iso...
	I0829 12:04:41.033630    5031 main.go:141] libmachine: Creating SSH key...
	I0829 12:04:41.188747    5031 main.go:141] libmachine: Creating Disk image...
	I0829 12:04:41.188755    5031 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0829 12:04:41.188938    5031 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19531-965/.minikube/machines/calico-015000/disk.qcow2.raw /Users/jenkins/minikube-integration/19531-965/.minikube/machines/calico-015000/disk.qcow2
	I0829 12:04:41.198602    5031 main.go:141] libmachine: STDOUT: 
	I0829 12:04:41.198621    5031 main.go:141] libmachine: STDERR: 
	I0829 12:04:41.198681    5031 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19531-965/.minikube/machines/calico-015000/disk.qcow2 +20000M
	I0829 12:04:41.206936    5031 main.go:141] libmachine: STDOUT: Image resized.
	
	I0829 12:04:41.206951    5031 main.go:141] libmachine: STDERR: 
	I0829 12:04:41.206967    5031 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19531-965/.minikube/machines/calico-015000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19531-965/.minikube/machines/calico-015000/disk.qcow2
	I0829 12:04:41.206971    5031 main.go:141] libmachine: Starting QEMU VM...
	I0829 12:04:41.206985    5031 qemu.go:418] Using hvf for hardware acceleration
	I0829 12:04:41.207015    5031 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19531-965/.minikube/machines/calico-015000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19531-965/.minikube/machines/calico-015000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19531-965/.minikube/machines/calico-015000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:1d:fd:a0:5d:22 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19531-965/.minikube/machines/calico-015000/disk.qcow2
	I0829 12:04:41.208652    5031 main.go:141] libmachine: STDOUT: 
	I0829 12:04:41.208667    5031 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0829 12:04:41.208686    5031 client.go:171] duration metric: took 335.381708ms to LocalClient.Create
	I0829 12:04:43.210914    5031 start.go:128] duration metric: took 2.363997084s to createHost
	I0829 12:04:43.211000    5031 start.go:83] releasing machines lock for "calico-015000", held for 2.364146292s
	W0829 12:04:43.211223    5031 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0829 12:04:43.232566    5031 out.go:177] * Deleting "calico-015000" in qemu2 ...
	W0829 12:04:43.265267    5031 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0829 12:04:43.265300    5031 start.go:729] Will try again in 5 seconds ...
	I0829 12:04:48.267510    5031 start.go:360] acquireMachinesLock for calico-015000: {Name:mk54c878341df5948b162b01843176e59d4f0973 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0829 12:04:48.268093    5031 start.go:364] duration metric: took 469.75µs to acquireMachinesLock for "calico-015000"
	I0829 12:04:48.268253    5031 start.go:93] Provisioning new machine with config: &{Name:calico-015000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:calico-015000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0829 12:04:48.268572    5031 start.go:125] createHost starting for "" (driver="qemu2")
	I0829 12:04:48.278262    5031 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0829 12:04:48.329349    5031 start.go:159] libmachine.API.Create for "calico-015000" (driver="qemu2")
	I0829 12:04:48.329398    5031 client.go:168] LocalClient.Create starting
	I0829 12:04:48.329521    5031 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19531-965/.minikube/certs/ca.pem
	I0829 12:04:48.329598    5031 main.go:141] libmachine: Decoding PEM data...
	I0829 12:04:48.329615    5031 main.go:141] libmachine: Parsing certificate...
	I0829 12:04:48.329675    5031 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19531-965/.minikube/certs/cert.pem
	I0829 12:04:48.329721    5031 main.go:141] libmachine: Decoding PEM data...
	I0829 12:04:48.329733    5031 main.go:141] libmachine: Parsing certificate...
	I0829 12:04:48.330278    5031 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19531-965/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19531-965/.minikube/cache/iso/arm64/minikube-v1.33.1-1724775098-19521-arm64.iso...
	I0829 12:04:48.521920    5031 main.go:141] libmachine: Creating SSH key...
	I0829 12:04:48.700978    5031 main.go:141] libmachine: Creating Disk image...
	I0829 12:04:48.700985    5031 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0829 12:04:48.701218    5031 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19531-965/.minikube/machines/calico-015000/disk.qcow2.raw /Users/jenkins/minikube-integration/19531-965/.minikube/machines/calico-015000/disk.qcow2
	I0829 12:04:48.711086    5031 main.go:141] libmachine: STDOUT: 
	I0829 12:04:48.711106    5031 main.go:141] libmachine: STDERR: 
	I0829 12:04:48.711156    5031 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19531-965/.minikube/machines/calico-015000/disk.qcow2 +20000M
	I0829 12:04:48.719275    5031 main.go:141] libmachine: STDOUT: Image resized.
	
	I0829 12:04:48.719291    5031 main.go:141] libmachine: STDERR: 
	I0829 12:04:48.719305    5031 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19531-965/.minikube/machines/calico-015000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19531-965/.minikube/machines/calico-015000/disk.qcow2
	I0829 12:04:48.719313    5031 main.go:141] libmachine: Starting QEMU VM...
	I0829 12:04:48.719321    5031 qemu.go:418] Using hvf for hardware acceleration
	I0829 12:04:48.719345    5031 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19531-965/.minikube/machines/calico-015000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19531-965/.minikube/machines/calico-015000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19531-965/.minikube/machines/calico-015000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:87:1e:23:06:9a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19531-965/.minikube/machines/calico-015000/disk.qcow2
	I0829 12:04:48.720918    5031 main.go:141] libmachine: STDOUT: 
	I0829 12:04:48.720933    5031 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0829 12:04:48.720946    5031 client.go:171] duration metric: took 391.549167ms to LocalClient.Create
	I0829 12:04:50.723104    5031 start.go:128] duration metric: took 2.454514s to createHost
	I0829 12:04:50.723166    5031 start.go:83] releasing machines lock for "calico-015000", held for 2.455071416s
	W0829 12:04:50.723505    5031 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p calico-015000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p calico-015000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0829 12:04:50.738124    5031 out.go:201] 
	W0829 12:04:50.741288    5031 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0829 12:04:50.741349    5031 out.go:270] * 
	* 
	W0829 12:04:50.743861    5031 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0829 12:04:50.751156    5031 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (10.04s)
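
The calico log above records the full recovery path minikube walks before giving up: LocalClient.Create fails, the half-created profile is deleted, the start is retried once after five seconds, and the second refusal is surfaced as GUEST_PROVISION with exit status 80. A compressed sketch of that control flow follows (the helper names are hypothetical stand-ins, not minikube's real functions):

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // Stand-in for the qemu2 create path; in this run it always fails
    // because the daemon socket refuses connections.
    var errConnRefused = errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)

    func createHost(name string) error { return errConnRefused }

    func deleteHost(name string) { fmt.Printf("* Deleting %q in qemu2 ...\n", name) }

    func startWithRetry(name string) error {
        if err := createHost(name); err != nil {
            deleteHost(name)
            fmt.Println("! StartHost failed, but will try again:", err)
            time.Sleep(5 * time.Second) // matches "Will try again in 5 seconds ..."
            if err := createHost(name); err != nil {
                // Reported as "Exiting due to GUEST_PROVISION" and exit status 80.
                return fmt.Errorf("error provisioning guest: %w", err)
            }
        }
        return nil
    }

    func main() {
        if err := startWithRetry("calico-015000"); err != nil {
            fmt.Println("X", err)
        }
    }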

TestNetworkPlugins/group/custom-flannel/Start (9.92s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-015000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-015000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (9.917302042s)

-- stdout --
	* [custom-flannel-015000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19531
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19531-965/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19531-965/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "custom-flannel-015000" primary control-plane node in "custom-flannel-015000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-015000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0829 12:04:53.121823    5154 out.go:345] Setting OutFile to fd 1 ...
	I0829 12:04:53.122017    5154 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 12:04:53.122020    5154 out.go:358] Setting ErrFile to fd 2...
	I0829 12:04:53.122023    5154 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 12:04:53.122141    5154 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19531-965/.minikube/bin
	I0829 12:04:53.123192    5154 out.go:352] Setting JSON to false
	I0829 12:04:53.139297    5154 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3857,"bootTime":1724954436,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0829 12:04:53.139406    5154 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0829 12:04:53.145470    5154 out.go:177] * [custom-flannel-015000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0829 12:04:53.155280    5154 out.go:177]   - MINIKUBE_LOCATION=19531
	I0829 12:04:53.155325    5154 notify.go:220] Checking for updates...
	I0829 12:04:53.160784    5154 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19531-965/kubeconfig
	I0829 12:04:53.164250    5154 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0829 12:04:53.167269    5154 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0829 12:04:53.170312    5154 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19531-965/.minikube
	I0829 12:04:53.173300    5154 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0829 12:04:53.176621    5154 config.go:182] Loaded profile config "cert-expiration-916000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0829 12:04:53.176698    5154 config.go:182] Loaded profile config "multinode-531000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0829 12:04:53.176746    5154 driver.go:392] Setting default libvirt URI to qemu:///system
	I0829 12:04:53.181229    5154 out.go:177] * Using the qemu2 driver based on user configuration
	I0829 12:04:53.188270    5154 start.go:297] selected driver: qemu2
	I0829 12:04:53.188277    5154 start.go:901] validating driver "qemu2" against <nil>
	I0829 12:04:53.188283    5154 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0829 12:04:53.190576    5154 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0829 12:04:53.193321    5154 out.go:177] * Automatically selected the socket_vmnet network
	I0829 12:04:53.196302    5154 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0829 12:04:53.196333    5154 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0829 12:04:53.196359    5154 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0829 12:04:53.196399    5154 start.go:340] cluster config:
	{Name:custom-flannel-015000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:custom-flannel-015000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 12:04:53.200178    5154 iso.go:125] acquiring lock: {Name:mke1867919ff797f263eb38fb348ce00bba9d753 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 12:04:53.208123    5154 out.go:177] * Starting "custom-flannel-015000" primary control-plane node in "custom-flannel-015000" cluster
	I0829 12:04:53.212248    5154 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0829 12:04:53.212266    5154 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19531-965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0829 12:04:53.212281    5154 cache.go:56] Caching tarball of preloaded images
	I0829 12:04:53.212384    5154 preload.go:172] Found /Users/jenkins/minikube-integration/19531-965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0829 12:04:53.212394    5154 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0829 12:04:53.212468    5154 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/custom-flannel-015000/config.json ...
	I0829 12:04:53.212481    5154 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/custom-flannel-015000/config.json: {Name:mkc0494917b1d40ce2addc603ebc5ea1c42ed6fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 12:04:53.212723    5154 start.go:360] acquireMachinesLock for custom-flannel-015000: {Name:mk54c878341df5948b162b01843176e59d4f0973 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0829 12:04:53.212764    5154 start.go:364] duration metric: took 31.959µs to acquireMachinesLock for "custom-flannel-015000"
	I0829 12:04:53.212776    5154 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-015000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:custom-flannel-015000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0829 12:04:53.212824    5154 start.go:125] createHost starting for "" (driver="qemu2")
	I0829 12:04:53.221239    5154 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0829 12:04:53.240087    5154 start.go:159] libmachine.API.Create for "custom-flannel-015000" (driver="qemu2")
	I0829 12:04:53.240191    5154 client.go:168] LocalClient.Create starting
	I0829 12:04:53.240267    5154 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19531-965/.minikube/certs/ca.pem
	I0829 12:04:53.240299    5154 main.go:141] libmachine: Decoding PEM data...
	I0829 12:04:53.240309    5154 main.go:141] libmachine: Parsing certificate...
	I0829 12:04:53.240347    5154 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19531-965/.minikube/certs/cert.pem
	I0829 12:04:53.240375    5154 main.go:141] libmachine: Decoding PEM data...
	I0829 12:04:53.240381    5154 main.go:141] libmachine: Parsing certificate...
	I0829 12:04:53.240742    5154 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19531-965/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19531-965/.minikube/cache/iso/arm64/minikube-v1.33.1-1724775098-19521-arm64.iso...
	I0829 12:04:53.403065    5154 main.go:141] libmachine: Creating SSH key...
	I0829 12:04:53.474065    5154 main.go:141] libmachine: Creating Disk image...
	I0829 12:04:53.474072    5154 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0829 12:04:53.474601    5154 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19531-965/.minikube/machines/custom-flannel-015000/disk.qcow2.raw /Users/jenkins/minikube-integration/19531-965/.minikube/machines/custom-flannel-015000/disk.qcow2
	I0829 12:04:53.483955    5154 main.go:141] libmachine: STDOUT: 
	I0829 12:04:53.483973    5154 main.go:141] libmachine: STDERR: 
	I0829 12:04:53.484023    5154 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19531-965/.minikube/machines/custom-flannel-015000/disk.qcow2 +20000M
	I0829 12:04:53.491952    5154 main.go:141] libmachine: STDOUT: Image resized.
	
	I0829 12:04:53.491976    5154 main.go:141] libmachine: STDERR: 
	I0829 12:04:53.491990    5154 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19531-965/.minikube/machines/custom-flannel-015000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19531-965/.minikube/machines/custom-flannel-015000/disk.qcow2
	I0829 12:04:53.491995    5154 main.go:141] libmachine: Starting QEMU VM...
	I0829 12:04:53.492005    5154 qemu.go:418] Using hvf for hardware acceleration
	I0829 12:04:53.492031    5154 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19531-965/.minikube/machines/custom-flannel-015000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19531-965/.minikube/machines/custom-flannel-015000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19531-965/.minikube/machines/custom-flannel-015000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:a3:bf:04:1b:79 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19531-965/.minikube/machines/custom-flannel-015000/disk.qcow2
	I0829 12:04:53.493607    5154 main.go:141] libmachine: STDOUT: 
	I0829 12:04:53.493627    5154 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0829 12:04:53.493645    5154 client.go:171] duration metric: took 253.453792ms to LocalClient.Create
	I0829 12:04:55.495793    5154 start.go:128] duration metric: took 2.282977667s to createHost
	I0829 12:04:55.495865    5154 start.go:83] releasing machines lock for "custom-flannel-015000", held for 2.283123542s
	W0829 12:04:55.495979    5154 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0829 12:04:55.508131    5154 out.go:177] * Deleting "custom-flannel-015000" in qemu2 ...
	W0829 12:04:55.548825    5154 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0829 12:04:55.548850    5154 start.go:729] Will try again in 5 seconds ...
	I0829 12:05:00.551091    5154 start.go:360] acquireMachinesLock for custom-flannel-015000: {Name:mk54c878341df5948b162b01843176e59d4f0973 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0829 12:05:00.551645    5154 start.go:364] duration metric: took 414.334µs to acquireMachinesLock for "custom-flannel-015000"
	I0829 12:05:00.551779    5154 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-015000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:custom-flannel-015000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0829 12:05:00.552039    5154 start.go:125] createHost starting for "" (driver="qemu2")
	I0829 12:05:00.561748    5154 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0829 12:05:00.614461    5154 start.go:159] libmachine.API.Create for "custom-flannel-015000" (driver="qemu2")
	I0829 12:05:00.614517    5154 client.go:168] LocalClient.Create starting
	I0829 12:05:00.614623    5154 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19531-965/.minikube/certs/ca.pem
	I0829 12:05:00.614690    5154 main.go:141] libmachine: Decoding PEM data...
	I0829 12:05:00.614707    5154 main.go:141] libmachine: Parsing certificate...
	I0829 12:05:00.614772    5154 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19531-965/.minikube/certs/cert.pem
	I0829 12:05:00.614814    5154 main.go:141] libmachine: Decoding PEM data...
	I0829 12:05:00.614825    5154 main.go:141] libmachine: Parsing certificate...
	I0829 12:05:00.615360    5154 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19531-965/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19531-965/.minikube/cache/iso/arm64/minikube-v1.33.1-1724775098-19521-arm64.iso...
	I0829 12:05:00.787382    5154 main.go:141] libmachine: Creating SSH key...
	I0829 12:05:00.937181    5154 main.go:141] libmachine: Creating Disk image...
	I0829 12:05:00.937188    5154 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0829 12:05:00.937365    5154 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19531-965/.minikube/machines/custom-flannel-015000/disk.qcow2.raw /Users/jenkins/minikube-integration/19531-965/.minikube/machines/custom-flannel-015000/disk.qcow2
	I0829 12:05:00.947166    5154 main.go:141] libmachine: STDOUT: 
	I0829 12:05:00.947183    5154 main.go:141] libmachine: STDERR: 
	I0829 12:05:00.947235    5154 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19531-965/.minikube/machines/custom-flannel-015000/disk.qcow2 +20000M
	I0829 12:05:00.955267    5154 main.go:141] libmachine: STDOUT: Image resized.
	
	I0829 12:05:00.955324    5154 main.go:141] libmachine: STDERR: 
	I0829 12:05:00.955336    5154 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19531-965/.minikube/machines/custom-flannel-015000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19531-965/.minikube/machines/custom-flannel-015000/disk.qcow2
	I0829 12:05:00.955340    5154 main.go:141] libmachine: Starting QEMU VM...
	I0829 12:05:00.955355    5154 qemu.go:418] Using hvf for hardware acceleration
	I0829 12:05:00.955387    5154 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19531-965/.minikube/machines/custom-flannel-015000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19531-965/.minikube/machines/custom-flannel-015000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19531-965/.minikube/machines/custom-flannel-015000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:40:20:ca:8b:a5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19531-965/.minikube/machines/custom-flannel-015000/disk.qcow2
	I0829 12:05:00.956970    5154 main.go:141] libmachine: STDOUT: 
	I0829 12:05:00.956994    5154 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0829 12:05:00.957006    5154 client.go:171] duration metric: took 342.488ms to LocalClient.Create
	I0829 12:05:02.959151    5154 start.go:128] duration metric: took 2.407120792s to createHost
	I0829 12:05:02.959200    5154 start.go:83] releasing machines lock for "custom-flannel-015000", held for 2.4075625s
	W0829 12:05:02.959554    5154 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-015000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-015000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0829 12:05:02.976234    5154 out.go:201] 
	W0829 12:05:02.980462    5154 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0829 12:05:02.980486    5154 out.go:270] * 
	* 
	W0829 12:05:02.982880    5154 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0829 12:05:02.997288    5154 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (9.92s)
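
The recorded command line also shows how the VM's networking is wired: qemu-system-aarch64 is not executed directly but as the trailing arguments of socket_vmnet_client, which connects to the daemon socket and passes the resulting vmnet connection to QEMU as inherited file descriptor 3 (hence -netdev socket,id=net0,fd=3). A sketch of that wrapper invocation, with the QEMU flags trimmed to the networking-relevant ones from the log (the Go program itself is illustrative, not minikube code):

    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        // socket_vmnet_client <socket> <command ...>: dial the socket, then
        // run the command with the vmnet connection inherited as fd 3.
        cmd := exec.Command(
            "/opt/socket_vmnet/bin/socket_vmnet_client",
            "/var/run/socket_vmnet", // "Connection refused" here means no daemon is listening
            "qemu-system-aarch64",
            "-M", "virt,highmem=off",
            "-accel", "hvf",
            "-device", "virtio-net-pci,netdev=net0",
            "-netdev", "socket,id=net0,fd=3", // fd 3 is the inherited vmnet connection
        )
        out, err := cmd.CombinedOutput()
        if err != nil {
            log.Fatalf("launch failed: %v\n%s", err, out)
        }
    }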

TestNetworkPlugins/group/false/Start (10.01s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-015000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-015000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (10.008186834s)
-- stdout --
	* [false-015000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19531
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19531-965/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19531-965/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "false-015000" primary control-plane node in "false-015000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-015000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I0829 12:05:05.396898    5274 out.go:345] Setting OutFile to fd 1 ...
	I0829 12:05:05.397017    5274 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 12:05:05.397021    5274 out.go:358] Setting ErrFile to fd 2...
	I0829 12:05:05.397024    5274 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 12:05:05.397152    5274 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19531-965/.minikube/bin
	I0829 12:05:05.398164    5274 out.go:352] Setting JSON to false
	I0829 12:05:05.414412    5274 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3869,"bootTime":1724954436,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0829 12:05:05.414474    5274 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0829 12:05:05.421998    5274 out.go:177] * [false-015000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0829 12:05:05.429797    5274 out.go:177]   - MINIKUBE_LOCATION=19531
	I0829 12:05:05.429831    5274 notify.go:220] Checking for updates...
	I0829 12:05:05.435788    5274 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19531-965/kubeconfig
	I0829 12:05:05.438770    5274 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0829 12:05:05.441823    5274 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0829 12:05:05.444765    5274 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19531-965/.minikube
	I0829 12:05:05.447780    5274 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0829 12:05:05.451094    5274 config.go:182] Loaded profile config "cert-expiration-916000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0829 12:05:05.451173    5274 config.go:182] Loaded profile config "multinode-531000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0829 12:05:05.451231    5274 driver.go:392] Setting default libvirt URI to qemu:///system
	I0829 12:05:05.455764    5274 out.go:177] * Using the qemu2 driver based on user configuration
	I0829 12:05:05.462823    5274 start.go:297] selected driver: qemu2
	I0829 12:05:05.462832    5274 start.go:901] validating driver "qemu2" against <nil>
	I0829 12:05:05.462840    5274 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0829 12:05:05.465115    5274 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0829 12:05:05.467809    5274 out.go:177] * Automatically selected the socket_vmnet network
	I0829 12:05:05.470905    5274 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0829 12:05:05.470961    5274 cni.go:84] Creating CNI manager for "false"
	I0829 12:05:05.471006    5274 start.go:340] cluster config:
	{Name:false-015000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:false-015000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 12:05:05.474712    5274 iso.go:125] acquiring lock: {Name:mke1867919ff797f263eb38fb348ce00bba9d753 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 12:05:05.482807    5274 out.go:177] * Starting "false-015000" primary control-plane node in "false-015000" cluster
	I0829 12:05:05.486777    5274 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0829 12:05:05.486791    5274 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19531-965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0829 12:05:05.486799    5274 cache.go:56] Caching tarball of preloaded images
	I0829 12:05:05.486865    5274 preload.go:172] Found /Users/jenkins/minikube-integration/19531-965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0829 12:05:05.486871    5274 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0829 12:05:05.486930    5274 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/false-015000/config.json ...
	I0829 12:05:05.486945    5274 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/false-015000/config.json: {Name:mk42d57a586a5cd4d6847eccffde52b6f5d97ba1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 12:05:05.487169    5274 start.go:360] acquireMachinesLock for false-015000: {Name:mk54c878341df5948b162b01843176e59d4f0973 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0829 12:05:05.487203    5274 start.go:364] duration metric: took 27.959µs to acquireMachinesLock for "false-015000"
	I0829 12:05:05.487213    5274 start.go:93] Provisioning new machine with config: &{Name:false-015000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:false-015000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0829 12:05:05.487243    5274 start.go:125] createHost starting for "" (driver="qemu2")
	I0829 12:05:05.495803    5274 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0829 12:05:05.514244    5274 start.go:159] libmachine.API.Create for "false-015000" (driver="qemu2")
	I0829 12:05:05.514271    5274 client.go:168] LocalClient.Create starting
	I0829 12:05:05.514350    5274 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19531-965/.minikube/certs/ca.pem
	I0829 12:05:05.514381    5274 main.go:141] libmachine: Decoding PEM data...
	I0829 12:05:05.514390    5274 main.go:141] libmachine: Parsing certificate...
	I0829 12:05:05.514431    5274 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19531-965/.minikube/certs/cert.pem
	I0829 12:05:05.514456    5274 main.go:141] libmachine: Decoding PEM data...
	I0829 12:05:05.514464    5274 main.go:141] libmachine: Parsing certificate...
	I0829 12:05:05.514967    5274 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19531-965/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19531-965/.minikube/cache/iso/arm64/minikube-v1.33.1-1724775098-19521-arm64.iso...
	I0829 12:05:05.676926    5274 main.go:141] libmachine: Creating SSH key...
	I0829 12:05:05.774581    5274 main.go:141] libmachine: Creating Disk image...
	I0829 12:05:05.774586    5274 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0829 12:05:05.774755    5274 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19531-965/.minikube/machines/false-015000/disk.qcow2.raw /Users/jenkins/minikube-integration/19531-965/.minikube/machines/false-015000/disk.qcow2
	I0829 12:05:05.784149    5274 main.go:141] libmachine: STDOUT: 
	I0829 12:05:05.784167    5274 main.go:141] libmachine: STDERR: 
	I0829 12:05:05.784224    5274 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19531-965/.minikube/machines/false-015000/disk.qcow2 +20000M
	I0829 12:05:05.792158    5274 main.go:141] libmachine: STDOUT: Image resized.
	
	I0829 12:05:05.792172    5274 main.go:141] libmachine: STDERR: 
	I0829 12:05:05.792186    5274 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19531-965/.minikube/machines/false-015000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19531-965/.minikube/machines/false-015000/disk.qcow2
	I0829 12:05:05.792189    5274 main.go:141] libmachine: Starting QEMU VM...
	I0829 12:05:05.792203    5274 qemu.go:418] Using hvf for hardware acceleration
	I0829 12:05:05.792231    5274 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19531-965/.minikube/machines/false-015000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19531-965/.minikube/machines/false-015000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19531-965/.minikube/machines/false-015000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:18:ff:6b:a1:a6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19531-965/.minikube/machines/false-015000/disk.qcow2
	I0829 12:05:05.793800    5274 main.go:141] libmachine: STDOUT: 
	I0829 12:05:05.793816    5274 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0829 12:05:05.793838    5274 client.go:171] duration metric: took 279.566375ms to LocalClient.Create
	I0829 12:05:07.796001    5274 start.go:128] duration metric: took 2.308769375s to createHost
	I0829 12:05:07.796047    5274 start.go:83] releasing machines lock for "false-015000", held for 2.30886525s
	W0829 12:05:07.796116    5274 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0829 12:05:07.813379    5274 out.go:177] * Deleting "false-015000" in qemu2 ...
	W0829 12:05:07.844478    5274 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0829 12:05:07.844501    5274 start.go:729] Will try again in 5 seconds ...
	I0829 12:05:12.846679    5274 start.go:360] acquireMachinesLock for false-015000: {Name:mk54c878341df5948b162b01843176e59d4f0973 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0829 12:05:12.847120    5274 start.go:364] duration metric: took 340.75µs to acquireMachinesLock for "false-015000"
	I0829 12:05:12.847260    5274 start.go:93] Provisioning new machine with config: &{Name:false-015000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:false-015000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0829 12:05:12.847530    5274 start.go:125] createHost starting for "" (driver="qemu2")
	I0829 12:05:12.855205    5274 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0829 12:05:12.904703    5274 start.go:159] libmachine.API.Create for "false-015000" (driver="qemu2")
	I0829 12:05:12.904757    5274 client.go:168] LocalClient.Create starting
	I0829 12:05:12.904860    5274 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19531-965/.minikube/certs/ca.pem
	I0829 12:05:12.904930    5274 main.go:141] libmachine: Decoding PEM data...
	I0829 12:05:12.904946    5274 main.go:141] libmachine: Parsing certificate...
	I0829 12:05:12.905014    5274 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19531-965/.minikube/certs/cert.pem
	I0829 12:05:12.905066    5274 main.go:141] libmachine: Decoding PEM data...
	I0829 12:05:12.905085    5274 main.go:141] libmachine: Parsing certificate...
	I0829 12:05:12.905676    5274 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19531-965/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19531-965/.minikube/cache/iso/arm64/minikube-v1.33.1-1724775098-19521-arm64.iso...
	I0829 12:05:13.079025    5274 main.go:141] libmachine: Creating SSH key...
	I0829 12:05:13.308979    5274 main.go:141] libmachine: Creating Disk image...
	I0829 12:05:13.308992    5274 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0829 12:05:13.309236    5274 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19531-965/.minikube/machines/false-015000/disk.qcow2.raw /Users/jenkins/minikube-integration/19531-965/.minikube/machines/false-015000/disk.qcow2
	I0829 12:05:13.318905    5274 main.go:141] libmachine: STDOUT: 
	I0829 12:05:13.318927    5274 main.go:141] libmachine: STDERR: 
	I0829 12:05:13.318986    5274 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19531-965/.minikube/machines/false-015000/disk.qcow2 +20000M
	I0829 12:05:13.327095    5274 main.go:141] libmachine: STDOUT: Image resized.
	
	I0829 12:05:13.327109    5274 main.go:141] libmachine: STDERR: 
	I0829 12:05:13.327122    5274 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19531-965/.minikube/machines/false-015000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19531-965/.minikube/machines/false-015000/disk.qcow2
	I0829 12:05:13.327126    5274 main.go:141] libmachine: Starting QEMU VM...
	I0829 12:05:13.327136    5274 qemu.go:418] Using hvf for hardware acceleration
	I0829 12:05:13.327174    5274 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19531-965/.minikube/machines/false-015000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19531-965/.minikube/machines/false-015000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19531-965/.minikube/machines/false-015000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:d1:41:26:64:1d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19531-965/.minikube/machines/false-015000/disk.qcow2
	I0829 12:05:13.328812    5274 main.go:141] libmachine: STDOUT: 
	I0829 12:05:13.328827    5274 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0829 12:05:13.328847    5274 client.go:171] duration metric: took 424.091083ms to LocalClient.Create
	I0829 12:05:15.330965    5274 start.go:128] duration metric: took 2.483479334s to createHost
	I0829 12:05:15.331035    5274 start.go:83] releasing machines lock for "false-015000", held for 2.483967083s
	W0829 12:05:15.331487    5274 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p false-015000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p false-015000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0829 12:05:15.341186    5274 out.go:201] 
	W0829 12:05:15.351332    5274 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0829 12:05:15.351375    5274 out.go:270] * 
	* 
	W0829 12:05:15.354129    5274 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0829 12:05:15.362105    5274 out.go:201] 
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (10.01s)
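
The log above also shows minikube's retry path behaving as designed: StartHost fails, the half-created "false-015000" machine is deleted, and a second create is attempted five seconds later, failing the same way. The refusal can be reproduced without minikube by connecting to the socket directly; a hedged one-liner (macOS nc accepts UNIX-domain sockets via -U):

	# Prints "refused" while no daemon is listening on the socket:
	nc -U /var/run/socket_vmnet </dev/null && echo listening || echo refused
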
TestNetworkPlugins/group/kindnet/Start (9.83s)
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-015000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-015000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (9.829441291s)
-- stdout --
	* [kindnet-015000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19531
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19531-965/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19531-965/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kindnet-015000" primary control-plane node in "kindnet-015000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-015000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I0829 12:05:17.577257    5387 out.go:345] Setting OutFile to fd 1 ...
	I0829 12:05:17.577446    5387 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 12:05:17.577451    5387 out.go:358] Setting ErrFile to fd 2...
	I0829 12:05:17.577453    5387 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 12:05:17.577576    5387 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19531-965/.minikube/bin
	I0829 12:05:17.578640    5387 out.go:352] Setting JSON to false
	I0829 12:05:17.594667    5387 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3881,"bootTime":1724954436,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0829 12:05:17.594746    5387 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0829 12:05:17.601108    5387 out.go:177] * [kindnet-015000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0829 12:05:17.609855    5387 out.go:177]   - MINIKUBE_LOCATION=19531
	I0829 12:05:17.609919    5387 notify.go:220] Checking for updates...
	I0829 12:05:17.615844    5387 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19531-965/kubeconfig
	I0829 12:05:17.618808    5387 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0829 12:05:17.620267    5387 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0829 12:05:17.623778    5387 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19531-965/.minikube
	I0829 12:05:17.626834    5387 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0829 12:05:17.630074    5387 config.go:182] Loaded profile config "cert-expiration-916000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0829 12:05:17.630147    5387 config.go:182] Loaded profile config "multinode-531000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0829 12:05:17.630192    5387 driver.go:392] Setting default libvirt URI to qemu:///system
	I0829 12:05:17.634766    5387 out.go:177] * Using the qemu2 driver based on user configuration
	I0829 12:05:17.641753    5387 start.go:297] selected driver: qemu2
	I0829 12:05:17.641759    5387 start.go:901] validating driver "qemu2" against <nil>
	I0829 12:05:17.641767    5387 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0829 12:05:17.643958    5387 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0829 12:05:17.647886    5387 out.go:177] * Automatically selected the socket_vmnet network
	I0829 12:05:17.650813    5387 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0829 12:05:17.650835    5387 cni.go:84] Creating CNI manager for "kindnet"
	I0829 12:05:17.650839    5387 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0829 12:05:17.650875    5387 start.go:340] cluster config:
	{Name:kindnet-015000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:kindnet-015000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 12:05:17.654530    5387 iso.go:125] acquiring lock: {Name:mke1867919ff797f263eb38fb348ce00bba9d753 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 12:05:17.661752    5387 out.go:177] * Starting "kindnet-015000" primary control-plane node in "kindnet-015000" cluster
	I0829 12:05:17.665751    5387 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0829 12:05:17.665766    5387 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19531-965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0829 12:05:17.665778    5387 cache.go:56] Caching tarball of preloaded images
	I0829 12:05:17.665841    5387 preload.go:172] Found /Users/jenkins/minikube-integration/19531-965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0829 12:05:17.665848    5387 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0829 12:05:17.665915    5387 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/kindnet-015000/config.json ...
	I0829 12:05:17.665931    5387 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/kindnet-015000/config.json: {Name:mkfcf1a38e88a80f79d66bf453a1bde9c797df12 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 12:05:17.666170    5387 start.go:360] acquireMachinesLock for kindnet-015000: {Name:mk54c878341df5948b162b01843176e59d4f0973 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0829 12:05:17.666204    5387 start.go:364] duration metric: took 27.75µs to acquireMachinesLock for "kindnet-015000"
	I0829 12:05:17.666214    5387 start.go:93] Provisioning new machine with config: &{Name:kindnet-015000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:kindnet-015000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0829 12:05:17.666244    5387 start.go:125] createHost starting for "" (driver="qemu2")
	I0829 12:05:17.674735    5387 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0829 12:05:17.692876    5387 start.go:159] libmachine.API.Create for "kindnet-015000" (driver="qemu2")
	I0829 12:05:17.692905    5387 client.go:168] LocalClient.Create starting
	I0829 12:05:17.692975    5387 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19531-965/.minikube/certs/ca.pem
	I0829 12:05:17.693005    5387 main.go:141] libmachine: Decoding PEM data...
	I0829 12:05:17.693015    5387 main.go:141] libmachine: Parsing certificate...
	I0829 12:05:17.693054    5387 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19531-965/.minikube/certs/cert.pem
	I0829 12:05:17.693078    5387 main.go:141] libmachine: Decoding PEM data...
	I0829 12:05:17.693089    5387 main.go:141] libmachine: Parsing certificate...
	I0829 12:05:17.693446    5387 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19531-965/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19531-965/.minikube/cache/iso/arm64/minikube-v1.33.1-1724775098-19521-arm64.iso...
	I0829 12:05:17.853926    5387 main.go:141] libmachine: Creating SSH key...
	I0829 12:05:17.924010    5387 main.go:141] libmachine: Creating Disk image...
	I0829 12:05:17.924015    5387 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0829 12:05:17.924199    5387 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19531-965/.minikube/machines/kindnet-015000/disk.qcow2.raw /Users/jenkins/minikube-integration/19531-965/.minikube/machines/kindnet-015000/disk.qcow2
	I0829 12:05:17.933721    5387 main.go:141] libmachine: STDOUT: 
	I0829 12:05:17.933738    5387 main.go:141] libmachine: STDERR: 
	I0829 12:05:17.933803    5387 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19531-965/.minikube/machines/kindnet-015000/disk.qcow2 +20000M
	I0829 12:05:17.941703    5387 main.go:141] libmachine: STDOUT: Image resized.
	
	I0829 12:05:17.941722    5387 main.go:141] libmachine: STDERR: 
	I0829 12:05:17.941733    5387 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19531-965/.minikube/machines/kindnet-015000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19531-965/.minikube/machines/kindnet-015000/disk.qcow2
	I0829 12:05:17.941739    5387 main.go:141] libmachine: Starting QEMU VM...
	I0829 12:05:17.941749    5387 qemu.go:418] Using hvf for hardware acceleration
	I0829 12:05:17.941774    5387 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19531-965/.minikube/machines/kindnet-015000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19531-965/.minikube/machines/kindnet-015000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19531-965/.minikube/machines/kindnet-015000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:90:0c:94:77:f5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19531-965/.minikube/machines/kindnet-015000/disk.qcow2
	I0829 12:05:17.943304    5387 main.go:141] libmachine: STDOUT: 
	I0829 12:05:17.943318    5387 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0829 12:05:17.943338    5387 client.go:171] duration metric: took 250.758ms to LocalClient.Create
	I0829 12:05:19.943049    5387 start.go:128] duration metric: took 2.279618208s to createHost
	I0829 12:05:19.943112    5387 start.go:83] releasing machines lock for "kindnet-015000", held for 2.279734125s
	W0829 12:05:19.943179    5387 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0829 12:05:19.954310    5387 out.go:177] * Deleting "kindnet-015000" in qemu2 ...
	W0829 12:05:19.993467    5387 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0829 12:05:19.993493    5387 start.go:729] Will try again in 5 seconds ...
	I0829 12:05:24.990768    5387 start.go:360] acquireMachinesLock for kindnet-015000: {Name:mk54c878341df5948b162b01843176e59d4f0973 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0829 12:05:24.991212    5387 start.go:364] duration metric: took 349.916µs to acquireMachinesLock for "kindnet-015000"
	I0829 12:05:24.991346    5387 start.go:93] Provisioning new machine with config: &{Name:kindnet-015000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:kindnet-015000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0829 12:05:24.991676    5387 start.go:125] createHost starting for "" (driver="qemu2")
	I0829 12:05:25.012204    5387 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0829 12:05:25.065333    5387 start.go:159] libmachine.API.Create for "kindnet-015000" (driver="qemu2")
	I0829 12:05:25.065382    5387 client.go:168] LocalClient.Create starting
	I0829 12:05:25.065503    5387 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19531-965/.minikube/certs/ca.pem
	I0829 12:05:25.065567    5387 main.go:141] libmachine: Decoding PEM data...
	I0829 12:05:25.065583    5387 main.go:141] libmachine: Parsing certificate...
	I0829 12:05:25.065643    5387 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19531-965/.minikube/certs/cert.pem
	I0829 12:05:25.065688    5387 main.go:141] libmachine: Decoding PEM data...
	I0829 12:05:25.065705    5387 main.go:141] libmachine: Parsing certificate...
	I0829 12:05:25.066305    5387 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19531-965/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19531-965/.minikube/cache/iso/arm64/minikube-v1.33.1-1724775098-19521-arm64.iso...
	I0829 12:05:25.236244    5387 main.go:141] libmachine: Creating SSH key...
	I0829 12:05:25.294472    5387 main.go:141] libmachine: Creating Disk image...
	I0829 12:05:25.294477    5387 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0829 12:05:25.294641    5387 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19531-965/.minikube/machines/kindnet-015000/disk.qcow2.raw /Users/jenkins/minikube-integration/19531-965/.minikube/machines/kindnet-015000/disk.qcow2
	I0829 12:05:25.303942    5387 main.go:141] libmachine: STDOUT: 
	I0829 12:05:25.303974    5387 main.go:141] libmachine: STDERR: 
	I0829 12:05:25.304035    5387 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19531-965/.minikube/machines/kindnet-015000/disk.qcow2 +20000M
	I0829 12:05:25.312034    5387 main.go:141] libmachine: STDOUT: Image resized.
	
	I0829 12:05:25.312058    5387 main.go:141] libmachine: STDERR: 
	I0829 12:05:25.312070    5387 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19531-965/.minikube/machines/kindnet-015000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19531-965/.minikube/machines/kindnet-015000/disk.qcow2
	I0829 12:05:25.312075    5387 main.go:141] libmachine: Starting QEMU VM...
	I0829 12:05:25.312085    5387 qemu.go:418] Using hvf for hardware acceleration
	I0829 12:05:25.312121    5387 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19531-965/.minikube/machines/kindnet-015000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19531-965/.minikube/machines/kindnet-015000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19531-965/.minikube/machines/kindnet-015000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0a:94:61:58:16:a7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19531-965/.minikube/machines/kindnet-015000/disk.qcow2
	I0829 12:05:25.313746    5387 main.go:141] libmachine: STDOUT: 
	I0829 12:05:25.313761    5387 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0829 12:05:25.313773    5387 client.go:171] duration metric: took 248.597333ms to LocalClient.Create
	I0829 12:05:27.314404    5387 start.go:128] duration metric: took 2.324526292s to createHost
	I0829 12:05:27.314478    5387 start.go:83] releasing machines lock for "kindnet-015000", held for 2.325067833s
	W0829 12:05:27.314851    5387 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-015000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kindnet-015000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0829 12:05:27.331563    5387 out.go:201] 
	W0829 12:05:27.335620    5387 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0829 12:05:27.335647    5387 out.go:270] * 
	* 
	W0829 12:05:27.337970    5387 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0829 12:05:27.355530    5387 out.go:201] 
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (9.83s)
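
The `executing:` lines above show how the VM is launched: qemu-system-aarch64 is not run directly but wrapped in socket_vmnet_client, which opens /var/run/socket_vmnet itself and hands the vmnet connection to QEMU as inherited fd 3 (-netdev socket,id=net0,fd=3). Since QEMU never touches the socket, the failing connect step can be isolated by substituting a no-op command for the QEMU invocation; a diagnostic sketch using only paths from the log (/usr/bin/true is a stand-in, not part of the test):

	# Exits 0 when the daemon is healthy; here it reproduces "Connection refused":
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet /usr/bin/true
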
TestNetworkPlugins/group/flannel/Start (9.81s)
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-015000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-015000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (9.807113708s)
-- stdout --
	* [flannel-015000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19531
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19531-965/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19531-965/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "flannel-015000" primary control-plane node in "flannel-015000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-015000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I0829 12:05:29.655891    5502 out.go:345] Setting OutFile to fd 1 ...
	I0829 12:05:29.656019    5502 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 12:05:29.656022    5502 out.go:358] Setting ErrFile to fd 2...
	I0829 12:05:29.656024    5502 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 12:05:29.656154    5502 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19531-965/.minikube/bin
	I0829 12:05:29.657229    5502 out.go:352] Setting JSON to false
	I0829 12:05:29.673399    5502 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3893,"bootTime":1724954436,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0829 12:05:29.673473    5502 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0829 12:05:29.680670    5502 out.go:177] * [flannel-015000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0829 12:05:29.689521    5502 out.go:177]   - MINIKUBE_LOCATION=19531
	I0829 12:05:29.689542    5502 notify.go:220] Checking for updates...
	I0829 12:05:29.695133    5502 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19531-965/kubeconfig
	I0829 12:05:29.698367    5502 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0829 12:05:29.701421    5502 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0829 12:05:29.704428    5502 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19531-965/.minikube
	I0829 12:05:29.707372    5502 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0829 12:05:29.710698    5502 config.go:182] Loaded profile config "cert-expiration-916000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0829 12:05:29.710768    5502 config.go:182] Loaded profile config "multinode-531000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0829 12:05:29.710815    5502 driver.go:392] Setting default libvirt URI to qemu:///system
	I0829 12:05:29.715364    5502 out.go:177] * Using the qemu2 driver based on user configuration
	I0829 12:05:29.722415    5502 start.go:297] selected driver: qemu2
	I0829 12:05:29.722425    5502 start.go:901] validating driver "qemu2" against <nil>
	I0829 12:05:29.722433    5502 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0829 12:05:29.724786    5502 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0829 12:05:29.732461    5502 out.go:177] * Automatically selected the socket_vmnet network
	I0829 12:05:29.735469    5502 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0829 12:05:29.735509    5502 cni.go:84] Creating CNI manager for "flannel"
	I0829 12:05:29.735514    5502 start_flags.go:319] Found "Flannel" CNI - setting NetworkPlugin=cni
	I0829 12:05:29.735559    5502 start.go:340] cluster config:
	{Name:flannel-015000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:flannel-015000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 12:05:29.739548    5502 iso.go:125] acquiring lock: {Name:mke1867919ff797f263eb38fb348ce00bba9d753 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 12:05:29.746355    5502 out.go:177] * Starting "flannel-015000" primary control-plane node in "flannel-015000" cluster
	I0829 12:05:29.750396    5502 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0829 12:05:29.750412    5502 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19531-965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0829 12:05:29.750435    5502 cache.go:56] Caching tarball of preloaded images
	I0829 12:05:29.750513    5502 preload.go:172] Found /Users/jenkins/minikube-integration/19531-965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0829 12:05:29.750519    5502 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0829 12:05:29.750585    5502 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/flannel-015000/config.json ...
	I0829 12:05:29.750596    5502 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/flannel-015000/config.json: {Name:mkf9c1ecd50c8f7893812866124e0075d21721ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 12:05:29.750830    5502 start.go:360] acquireMachinesLock for flannel-015000: {Name:mk54c878341df5948b162b01843176e59d4f0973 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0829 12:05:29.750866    5502 start.go:364] duration metric: took 30.083µs to acquireMachinesLock for "flannel-015000"
	I0829 12:05:29.750878    5502 start.go:93] Provisioning new machine with config: &{Name:flannel-015000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:flannel-015000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0829 12:05:29.750915    5502 start.go:125] createHost starting for "" (driver="qemu2")
	I0829 12:05:29.759409    5502 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0829 12:05:29.779251    5502 start.go:159] libmachine.API.Create for "flannel-015000" (driver="qemu2")
	I0829 12:05:29.779277    5502 client.go:168] LocalClient.Create starting
	I0829 12:05:29.779344    5502 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19531-965/.minikube/certs/ca.pem
	I0829 12:05:29.779378    5502 main.go:141] libmachine: Decoding PEM data...
	I0829 12:05:29.779387    5502 main.go:141] libmachine: Parsing certificate...
	I0829 12:05:29.779426    5502 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19531-965/.minikube/certs/cert.pem
	I0829 12:05:29.779449    5502 main.go:141] libmachine: Decoding PEM data...
	I0829 12:05:29.779457    5502 main.go:141] libmachine: Parsing certificate...
	I0829 12:05:29.779819    5502 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19531-965/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19531-965/.minikube/cache/iso/arm64/minikube-v1.33.1-1724775098-19521-arm64.iso...
	I0829 12:05:29.944712    5502 main.go:141] libmachine: Creating SSH key...
	I0829 12:05:29.985164    5502 main.go:141] libmachine: Creating Disk image...
	I0829 12:05:29.985169    5502 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0829 12:05:29.985342    5502 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19531-965/.minikube/machines/flannel-015000/disk.qcow2.raw /Users/jenkins/minikube-integration/19531-965/.minikube/machines/flannel-015000/disk.qcow2
	I0829 12:05:29.994672    5502 main.go:141] libmachine: STDOUT: 
	I0829 12:05:29.994689    5502 main.go:141] libmachine: STDERR: 
	I0829 12:05:29.994731    5502 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19531-965/.minikube/machines/flannel-015000/disk.qcow2 +20000M
	I0829 12:05:30.002664    5502 main.go:141] libmachine: STDOUT: Image resized.
	
	I0829 12:05:30.002687    5502 main.go:141] libmachine: STDERR: 
	I0829 12:05:30.002700    5502 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19531-965/.minikube/machines/flannel-015000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19531-965/.minikube/machines/flannel-015000/disk.qcow2
	I0829 12:05:30.002704    5502 main.go:141] libmachine: Starting QEMU VM...
	I0829 12:05:30.002714    5502 qemu.go:418] Using hvf for hardware acceleration
	I0829 12:05:30.002748    5502 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19531-965/.minikube/machines/flannel-015000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19531-965/.minikube/machines/flannel-015000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19531-965/.minikube/machines/flannel-015000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ca:ac:1b:7b:01:f8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19531-965/.minikube/machines/flannel-015000/disk.qcow2
	I0829 12:05:30.004352    5502 main.go:141] libmachine: STDOUT: 
	I0829 12:05:30.004367    5502 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0829 12:05:30.004385    5502 client.go:171] duration metric: took 225.2435ms to LocalClient.Create
	I0829 12:05:32.005384    5502 start.go:128] duration metric: took 2.255763709s to createHost
	I0829 12:05:32.005461    5502 start.go:83] releasing machines lock for "flannel-015000", held for 2.255901041s
	W0829 12:05:32.005541    5502 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0829 12:05:32.023784    5502 out.go:177] * Deleting "flannel-015000" in qemu2 ...
	W0829 12:05:32.057308    5502 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0829 12:05:32.057333    5502 start.go:729] Will try again in 5 seconds ...
	I0829 12:05:37.055418    5502 start.go:360] acquireMachinesLock for flannel-015000: {Name:mk54c878341df5948b162b01843176e59d4f0973 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0829 12:05:37.055940    5502 start.go:364] duration metric: took 417.75µs to acquireMachinesLock for "flannel-015000"
	I0829 12:05:37.056095    5502 start.go:93] Provisioning new machine with config: &{Name:flannel-015000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:flannel-015000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0829 12:05:37.056317    5502 start.go:125] createHost starting for "" (driver="qemu2")
	I0829 12:05:37.067915    5502 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0829 12:05:37.120013    5502 start.go:159] libmachine.API.Create for "flannel-015000" (driver="qemu2")
	I0829 12:05:37.120078    5502 client.go:168] LocalClient.Create starting
	I0829 12:05:37.120196    5502 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19531-965/.minikube/certs/ca.pem
	I0829 12:05:37.120265    5502 main.go:141] libmachine: Decoding PEM data...
	I0829 12:05:37.120282    5502 main.go:141] libmachine: Parsing certificate...
	I0829 12:05:37.120373    5502 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19531-965/.minikube/certs/cert.pem
	I0829 12:05:37.120424    5502 main.go:141] libmachine: Decoding PEM data...
	I0829 12:05:37.120438    5502 main.go:141] libmachine: Parsing certificate...
	I0829 12:05:37.120941    5502 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19531-965/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19531-965/.minikube/cache/iso/arm64/minikube-v1.33.1-1724775098-19521-arm64.iso...
	I0829 12:05:37.293512    5502 main.go:141] libmachine: Creating SSH key...
	I0829 12:05:37.362628    5502 main.go:141] libmachine: Creating Disk image...
	I0829 12:05:37.362633    5502 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0829 12:05:37.362809    5502 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19531-965/.minikube/machines/flannel-015000/disk.qcow2.raw /Users/jenkins/minikube-integration/19531-965/.minikube/machines/flannel-015000/disk.qcow2
	I0829 12:05:37.372378    5502 main.go:141] libmachine: STDOUT: 
	I0829 12:05:37.372401    5502 main.go:141] libmachine: STDERR: 
	I0829 12:05:37.372451    5502 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19531-965/.minikube/machines/flannel-015000/disk.qcow2 +20000M
	I0829 12:05:37.380544    5502 main.go:141] libmachine: STDOUT: Image resized.
	
	I0829 12:05:37.380568    5502 main.go:141] libmachine: STDERR: 
	I0829 12:05:37.380579    5502 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19531-965/.minikube/machines/flannel-015000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19531-965/.minikube/machines/flannel-015000/disk.qcow2
	I0829 12:05:37.380584    5502 main.go:141] libmachine: Starting QEMU VM...
	I0829 12:05:37.380592    5502 qemu.go:418] Using hvf for hardware acceleration
	I0829 12:05:37.380621    5502 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19531-965/.minikube/machines/flannel-015000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19531-965/.minikube/machines/flannel-015000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19531-965/.minikube/machines/flannel-015000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:a8:1e:83:6d:3f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19531-965/.minikube/machines/flannel-015000/disk.qcow2
	I0829 12:05:37.382300    5502 main.go:141] libmachine: STDOUT: 
	I0829 12:05:37.382317    5502 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0829 12:05:37.382332    5502 client.go:171] duration metric: took 262.353458ms to LocalClient.Create
	I0829 12:05:39.383822    5502 start.go:128] duration metric: took 2.328351s to createHost
	I0829 12:05:39.383948    5502 start.go:83] releasing machines lock for "flannel-015000", held for 2.328823542s
	W0829 12:05:39.384323    5502 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p flannel-015000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0829 12:05:39.393830    5502 out.go:201] 
	W0829 12:05:39.402848    5502 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0829 12:05:39.402894    5502 out.go:270] * 
	W0829 12:05:39.405340    5502 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0829 12:05:39.415815    5502 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (9.81s)
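
Every failure in this group follows the same pattern: socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet, so QEMU never receives its network file descriptor and minikube exits with GUEST_PROVISION. A minimal host-side diagnostic sketch, assuming socket_vmnet is installed under the /opt/socket_vmnet prefix that appears in the command lines above (the launchd check is an assumption about how this CI host manages the daemon):

	# Is the daemon socket present? A healthy daemon leaves a unix socket here.
	ls -l /var/run/socket_vmnet
	# Assumption: the daemon is managed by launchd on this host.
	sudo launchctl list | grep -i socket_vmnet
	# Foreground smoke test, per the socket_vmnet README (root is required for vmnet).
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet

If the socket file exists but connections are still refused, a stale socket left behind by a crashed daemon is a plausible cause; removing the file and restarting the daemon clears that state.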

TestNetworkPlugins/group/enable-default-cni/Start (9.93s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-015000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-015000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (9.924414667s)

-- stdout --
	* [enable-default-cni-015000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19531
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19531-965/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19531-965/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "enable-default-cni-015000" primary control-plane node in "enable-default-cni-015000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-015000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0829 12:05:41.805426    5625 out.go:345] Setting OutFile to fd 1 ...
	I0829 12:05:41.805559    5625 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 12:05:41.805564    5625 out.go:358] Setting ErrFile to fd 2...
	I0829 12:05:41.805566    5625 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 12:05:41.805708    5625 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19531-965/.minikube/bin
	I0829 12:05:41.806759    5625 out.go:352] Setting JSON to false
	I0829 12:05:41.822870    5625 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3905,"bootTime":1724954436,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0829 12:05:41.822939    5625 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0829 12:05:41.828417    5625 out.go:177] * [enable-default-cni-015000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0829 12:05:41.835300    5625 out.go:177]   - MINIKUBE_LOCATION=19531
	I0829 12:05:41.835381    5625 notify.go:220] Checking for updates...
	I0829 12:05:41.842353    5625 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19531-965/kubeconfig
	I0829 12:05:41.845323    5625 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0829 12:05:41.848420    5625 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0829 12:05:41.851398    5625 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19531-965/.minikube
	I0829 12:05:41.852867    5625 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0829 12:05:41.856750    5625 config.go:182] Loaded profile config "cert-expiration-916000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0829 12:05:41.856822    5625 config.go:182] Loaded profile config "multinode-531000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0829 12:05:41.856873    5625 driver.go:392] Setting default libvirt URI to qemu:///system
	I0829 12:05:41.861392    5625 out.go:177] * Using the qemu2 driver based on user configuration
	I0829 12:05:41.867335    5625 start.go:297] selected driver: qemu2
	I0829 12:05:41.867342    5625 start.go:901] validating driver "qemu2" against <nil>
	I0829 12:05:41.867355    5625 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0829 12:05:41.869615    5625 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0829 12:05:41.872354    5625 out.go:177] * Automatically selected the socket_vmnet network
	E0829 12:05:41.875507    5625 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0829 12:05:41.875520    5625 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0829 12:05:41.875556    5625 cni.go:84] Creating CNI manager for "bridge"
	I0829 12:05:41.875560    5625 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0829 12:05:41.875589    5625 start.go:340] cluster config:
	{Name:enable-default-cni-015000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:enable-default-cni-015000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 12:05:41.879356    5625 iso.go:125] acquiring lock: {Name:mke1867919ff797f263eb38fb348ce00bba9d753 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 12:05:41.887394    5625 out.go:177] * Starting "enable-default-cni-015000" primary control-plane node in "enable-default-cni-015000" cluster
	I0829 12:05:41.891347    5625 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0829 12:05:41.891365    5625 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19531-965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0829 12:05:41.891378    5625 cache.go:56] Caching tarball of preloaded images
	I0829 12:05:41.891443    5625 preload.go:172] Found /Users/jenkins/minikube-integration/19531-965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0829 12:05:41.891449    5625 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0829 12:05:41.891524    5625 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/enable-default-cni-015000/config.json ...
	I0829 12:05:41.891535    5625 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/enable-default-cni-015000/config.json: {Name:mk66ad1e9687b5bba8391100dceb14ca58ecfe2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 12:05:41.891751    5625 start.go:360] acquireMachinesLock for enable-default-cni-015000: {Name:mk54c878341df5948b162b01843176e59d4f0973 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0829 12:05:41.891786    5625 start.go:364] duration metric: took 28.208µs to acquireMachinesLock for "enable-default-cni-015000"
	I0829 12:05:41.891797    5625 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-015000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:enable-default-cni-015000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0829 12:05:41.891824    5625 start.go:125] createHost starting for "" (driver="qemu2")
	I0829 12:05:41.899369    5625 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0829 12:05:41.917236    5625 start.go:159] libmachine.API.Create for "enable-default-cni-015000" (driver="qemu2")
	I0829 12:05:41.917271    5625 client.go:168] LocalClient.Create starting
	I0829 12:05:41.917337    5625 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19531-965/.minikube/certs/ca.pem
	I0829 12:05:41.917368    5625 main.go:141] libmachine: Decoding PEM data...
	I0829 12:05:41.917380    5625 main.go:141] libmachine: Parsing certificate...
	I0829 12:05:41.917422    5625 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19531-965/.minikube/certs/cert.pem
	I0829 12:05:41.917447    5625 main.go:141] libmachine: Decoding PEM data...
	I0829 12:05:41.917454    5625 main.go:141] libmachine: Parsing certificate...
	I0829 12:05:41.917847    5625 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19531-965/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19531-965/.minikube/cache/iso/arm64/minikube-v1.33.1-1724775098-19521-arm64.iso...
	I0829 12:05:42.079213    5625 main.go:141] libmachine: Creating SSH key...
	I0829 12:05:42.127552    5625 main.go:141] libmachine: Creating Disk image...
	I0829 12:05:42.127557    5625 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0829 12:05:42.127726    5625 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19531-965/.minikube/machines/enable-default-cni-015000/disk.qcow2.raw /Users/jenkins/minikube-integration/19531-965/.minikube/machines/enable-default-cni-015000/disk.qcow2
	I0829 12:05:42.136928    5625 main.go:141] libmachine: STDOUT: 
	I0829 12:05:42.136946    5625 main.go:141] libmachine: STDERR: 
	I0829 12:05:42.136992    5625 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19531-965/.minikube/machines/enable-default-cni-015000/disk.qcow2 +20000M
	I0829 12:05:42.144826    5625 main.go:141] libmachine: STDOUT: Image resized.
	
	I0829 12:05:42.144851    5625 main.go:141] libmachine: STDERR: 
	I0829 12:05:42.144867    5625 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19531-965/.minikube/machines/enable-default-cni-015000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19531-965/.minikube/machines/enable-default-cni-015000/disk.qcow2
	I0829 12:05:42.144873    5625 main.go:141] libmachine: Starting QEMU VM...
	I0829 12:05:42.144885    5625 qemu.go:418] Using hvf for hardware acceleration
	I0829 12:05:42.144910    5625 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19531-965/.minikube/machines/enable-default-cni-015000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19531-965/.minikube/machines/enable-default-cni-015000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19531-965/.minikube/machines/enable-default-cni-015000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:3d:af:7a:58:98 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19531-965/.minikube/machines/enable-default-cni-015000/disk.qcow2
	I0829 12:05:42.146479    5625 main.go:141] libmachine: STDOUT: 
	I0829 12:05:42.146496    5625 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0829 12:05:42.146513    5625 client.go:171] duration metric: took 229.305792ms to LocalClient.Create
	I0829 12:05:44.148128    5625 start.go:128] duration metric: took 2.256918834s to createHost
	I0829 12:05:44.148305    5625 start.go:83] releasing machines lock for "enable-default-cni-015000", held for 2.257043667s
	W0829 12:05:44.148375    5625 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0829 12:05:44.155699    5625 out.go:177] * Deleting "enable-default-cni-015000" in qemu2 ...
	W0829 12:05:44.186405    5625 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0829 12:05:44.186427    5625 start.go:729] Will try again in 5 seconds ...
	I0829 12:05:49.187542    5625 start.go:360] acquireMachinesLock for enable-default-cni-015000: {Name:mk54c878341df5948b162b01843176e59d4f0973 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0829 12:05:49.187987    5625 start.go:364] duration metric: took 377.375µs to acquireMachinesLock for "enable-default-cni-015000"
	I0829 12:05:49.188114    5625 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-015000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:enable-default-cni-015000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0829 12:05:49.188499    5625 start.go:125] createHost starting for "" (driver="qemu2")
	I0829 12:05:49.196116    5625 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0829 12:05:49.250550    5625 start.go:159] libmachine.API.Create for "enable-default-cni-015000" (driver="qemu2")
	I0829 12:05:49.250590    5625 client.go:168] LocalClient.Create starting
	I0829 12:05:49.250707    5625 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19531-965/.minikube/certs/ca.pem
	I0829 12:05:49.250787    5625 main.go:141] libmachine: Decoding PEM data...
	I0829 12:05:49.250804    5625 main.go:141] libmachine: Parsing certificate...
	I0829 12:05:49.250872    5625 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19531-965/.minikube/certs/cert.pem
	I0829 12:05:49.250914    5625 main.go:141] libmachine: Decoding PEM data...
	I0829 12:05:49.250928    5625 main.go:141] libmachine: Parsing certificate...
	I0829 12:05:49.251451    5625 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19531-965/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19531-965/.minikube/cache/iso/arm64/minikube-v1.33.1-1724775098-19521-arm64.iso...
	I0829 12:05:49.423115    5625 main.go:141] libmachine: Creating SSH key...
	I0829 12:05:49.624782    5625 main.go:141] libmachine: Creating Disk image...
	I0829 12:05:49.624789    5625 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0829 12:05:49.624995    5625 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19531-965/.minikube/machines/enable-default-cni-015000/disk.qcow2.raw /Users/jenkins/minikube-integration/19531-965/.minikube/machines/enable-default-cni-015000/disk.qcow2
	I0829 12:05:49.635051    5625 main.go:141] libmachine: STDOUT: 
	I0829 12:05:49.635073    5625 main.go:141] libmachine: STDERR: 
	I0829 12:05:49.635132    5625 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19531-965/.minikube/machines/enable-default-cni-015000/disk.qcow2 +20000M
	I0829 12:05:49.643270    5625 main.go:141] libmachine: STDOUT: Image resized.
	
	I0829 12:05:49.643287    5625 main.go:141] libmachine: STDERR: 
	I0829 12:05:49.643305    5625 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19531-965/.minikube/machines/enable-default-cni-015000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19531-965/.minikube/machines/enable-default-cni-015000/disk.qcow2
	I0829 12:05:49.643310    5625 main.go:141] libmachine: Starting QEMU VM...
	I0829 12:05:49.643319    5625 qemu.go:418] Using hvf for hardware acceleration
	I0829 12:05:49.643346    5625 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19531-965/.minikube/machines/enable-default-cni-015000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19531-965/.minikube/machines/enable-default-cni-015000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19531-965/.minikube/machines/enable-default-cni-015000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:5c:73:c6:f6:84 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19531-965/.minikube/machines/enable-default-cni-015000/disk.qcow2
	I0829 12:05:49.644869    5625 main.go:141] libmachine: STDOUT: 
	I0829 12:05:49.644888    5625 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0829 12:05:49.644899    5625 client.go:171] duration metric: took 394.379834ms to LocalClient.Create
	I0829 12:05:51.646714    5625 start.go:128] duration metric: took 2.458621917s to createHost
	I0829 12:05:51.646779    5625 start.go:83] releasing machines lock for "enable-default-cni-015000", held for 2.459216208s
	W0829 12:05:51.647322    5625 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-015000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0829 12:05:51.664007    5625 out.go:201] 
	W0829 12:05:51.669034    5625 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0829 12:05:51.669067    5625 out.go:270] * 
	W0829 12:05:51.672073    5625 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0829 12:05:51.685860    5625 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (9.93s)
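
Note the E-level line in the stderr above: --enable-default-cni is deprecated and is rewritten to --cni=bridge, so this test converges on the same bridge CNI configuration that TestNetworkPlugins/group/bridge exercises next; the failure here is environmental (the socket_vmnet connection), not CNI-specific. A sketch of the equivalent non-deprecated invocation, reusing the profile name and flags shown in the log:

	# Equivalent modern form of the deprecated flag (same resulting cluster config)
	out/minikube-darwin-arm64 start -p enable-default-cni-015000 --memory=3072 --cni=bridge --driver=qemu2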

TestNetworkPlugins/group/bridge/Start (9.89s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-015000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-015000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (9.889635125s)

-- stdout --
	* [bridge-015000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19531
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19531-965/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19531-965/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "bridge-015000" primary control-plane node in "bridge-015000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-015000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0829 12:05:53.904377    5739 out.go:345] Setting OutFile to fd 1 ...
	I0829 12:05:53.904524    5739 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 12:05:53.904526    5739 out.go:358] Setting ErrFile to fd 2...
	I0829 12:05:53.904528    5739 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 12:05:53.904666    5739 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19531-965/.minikube/bin
	I0829 12:05:53.905730    5739 out.go:352] Setting JSON to false
	I0829 12:05:53.921911    5739 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3917,"bootTime":1724954436,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0829 12:05:53.921988    5739 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0829 12:05:53.928861    5739 out.go:177] * [bridge-015000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0829 12:05:53.937710    5739 out.go:177]   - MINIKUBE_LOCATION=19531
	I0829 12:05:53.937766    5739 notify.go:220] Checking for updates...
	I0829 12:05:53.945591    5739 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19531-965/kubeconfig
	I0829 12:05:53.948601    5739 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0829 12:05:53.951635    5739 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0829 12:05:53.954638    5739 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19531-965/.minikube
	I0829 12:05:53.957633    5739 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0829 12:05:53.960968    5739 config.go:182] Loaded profile config "cert-expiration-916000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0829 12:05:53.961033    5739 config.go:182] Loaded profile config "multinode-531000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0829 12:05:53.961081    5739 driver.go:392] Setting default libvirt URI to qemu:///system
	I0829 12:05:53.965624    5739 out.go:177] * Using the qemu2 driver based on user configuration
	I0829 12:05:53.972645    5739 start.go:297] selected driver: qemu2
	I0829 12:05:53.972656    5739 start.go:901] validating driver "qemu2" against <nil>
	I0829 12:05:53.972662    5739 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0829 12:05:53.975028    5739 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0829 12:05:53.978569    5739 out.go:177] * Automatically selected the socket_vmnet network
	I0829 12:05:53.981709    5739 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0829 12:05:53.981766    5739 cni.go:84] Creating CNI manager for "bridge"
	I0829 12:05:53.981774    5739 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0829 12:05:53.981808    5739 start.go:340] cluster config:
	{Name:bridge-015000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:bridge-015000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 12:05:53.985701    5739 iso.go:125] acquiring lock: {Name:mke1867919ff797f263eb38fb348ce00bba9d753 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 12:05:53.991590    5739 out.go:177] * Starting "bridge-015000" primary control-plane node in "bridge-015000" cluster
	I0829 12:05:53.995642    5739 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0829 12:05:53.995658    5739 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19531-965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0829 12:05:53.995670    5739 cache.go:56] Caching tarball of preloaded images
	I0829 12:05:53.995741    5739 preload.go:172] Found /Users/jenkins/minikube-integration/19531-965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0829 12:05:53.995747    5739 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0829 12:05:53.995814    5739 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/bridge-015000/config.json ...
	I0829 12:05:53.995826    5739 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/bridge-015000/config.json: {Name:mk333904299aafa200cce2b76c738c183e4409fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 12:05:53.996049    5739 start.go:360] acquireMachinesLock for bridge-015000: {Name:mk54c878341df5948b162b01843176e59d4f0973 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0829 12:05:53.996085    5739 start.go:364] duration metric: took 29.25µs to acquireMachinesLock for "bridge-015000"
	I0829 12:05:53.996098    5739 start.go:93] Provisioning new machine with config: &{Name:bridge-015000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:bridge-015000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0829 12:05:53.996125    5739 start.go:125] createHost starting for "" (driver="qemu2")
	I0829 12:05:54.003635    5739 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0829 12:05:54.022618    5739 start.go:159] libmachine.API.Create for "bridge-015000" (driver="qemu2")
	I0829 12:05:54.022647    5739 client.go:168] LocalClient.Create starting
	I0829 12:05:54.022721    5739 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19531-965/.minikube/certs/ca.pem
	I0829 12:05:54.022753    5739 main.go:141] libmachine: Decoding PEM data...
	I0829 12:05:54.022762    5739 main.go:141] libmachine: Parsing certificate...
	I0829 12:05:54.022801    5739 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19531-965/.minikube/certs/cert.pem
	I0829 12:05:54.022828    5739 main.go:141] libmachine: Decoding PEM data...
	I0829 12:05:54.022841    5739 main.go:141] libmachine: Parsing certificate...
	I0829 12:05:54.023198    5739 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19531-965/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19531-965/.minikube/cache/iso/arm64/minikube-v1.33.1-1724775098-19521-arm64.iso...
	I0829 12:05:54.182390    5739 main.go:141] libmachine: Creating SSH key...
	I0829 12:05:54.263689    5739 main.go:141] libmachine: Creating Disk image...
	I0829 12:05:54.263694    5739 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0829 12:05:54.263866    5739 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19531-965/.minikube/machines/bridge-015000/disk.qcow2.raw /Users/jenkins/minikube-integration/19531-965/.minikube/machines/bridge-015000/disk.qcow2
	I0829 12:05:54.273381    5739 main.go:141] libmachine: STDOUT: 
	I0829 12:05:54.273402    5739 main.go:141] libmachine: STDERR: 
	I0829 12:05:54.273450    5739 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19531-965/.minikube/machines/bridge-015000/disk.qcow2 +20000M
	I0829 12:05:54.281448    5739 main.go:141] libmachine: STDOUT: Image resized.
	
	I0829 12:05:54.281463    5739 main.go:141] libmachine: STDERR: 
	I0829 12:05:54.281475    5739 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19531-965/.minikube/machines/bridge-015000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19531-965/.minikube/machines/bridge-015000/disk.qcow2
	I0829 12:05:54.281478    5739 main.go:141] libmachine: Starting QEMU VM...
	I0829 12:05:54.281490    5739 qemu.go:418] Using hvf for hardware acceleration
	I0829 12:05:54.281512    5739 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19531-965/.minikube/machines/bridge-015000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19531-965/.minikube/machines/bridge-015000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19531-965/.minikube/machines/bridge-015000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:47:6f:fb:79:4a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19531-965/.minikube/machines/bridge-015000/disk.qcow2
	I0829 12:05:54.283120    5739 main.go:141] libmachine: STDOUT: 
	I0829 12:05:54.283145    5739 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0829 12:05:54.283170    5739 client.go:171] duration metric: took 260.556292ms to LocalClient.Create
	I0829 12:05:56.285065    5739 start.go:128] duration metric: took 2.289236416s to createHost
	I0829 12:05:56.285113    5739 start.go:83] releasing machines lock for "bridge-015000", held for 2.28934175s
	W0829 12:05:56.285185    5739 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0829 12:05:56.302316    5739 out.go:177] * Deleting "bridge-015000" in qemu2 ...
	W0829 12:05:56.334869    5739 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0829 12:05:56.334896    5739 start.go:729] Will try again in 5 seconds ...
	I0829 12:06:01.336662    5739 start.go:360] acquireMachinesLock for bridge-015000: {Name:mk54c878341df5948b162b01843176e59d4f0973 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0829 12:06:01.337141    5739 start.go:364] duration metric: took 370.583µs to acquireMachinesLock for "bridge-015000"
	I0829 12:06:01.337270    5739 start.go:93] Provisioning new machine with config: &{Name:bridge-015000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:bridge-015000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0829 12:06:01.337580    5739 start.go:125] createHost starting for "" (driver="qemu2")
	I0829 12:06:01.347288    5739 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0829 12:06:01.399700    5739 start.go:159] libmachine.API.Create for "bridge-015000" (driver="qemu2")
	I0829 12:06:01.399750    5739 client.go:168] LocalClient.Create starting
	I0829 12:06:01.399868    5739 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19531-965/.minikube/certs/ca.pem
	I0829 12:06:01.399938    5739 main.go:141] libmachine: Decoding PEM data...
	I0829 12:06:01.399956    5739 main.go:141] libmachine: Parsing certificate...
	I0829 12:06:01.400038    5739 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19531-965/.minikube/certs/cert.pem
	I0829 12:06:01.400085    5739 main.go:141] libmachine: Decoding PEM data...
	I0829 12:06:01.400099    5739 main.go:141] libmachine: Parsing certificate...
	I0829 12:06:01.400635    5739 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19531-965/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19531-965/.minikube/cache/iso/arm64/minikube-v1.33.1-1724775098-19521-arm64.iso...
	I0829 12:06:01.575567    5739 main.go:141] libmachine: Creating SSH key...
	I0829 12:06:01.696280    5739 main.go:141] libmachine: Creating Disk image...
	I0829 12:06:01.696286    5739 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0829 12:06:01.696457    5739 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19531-965/.minikube/machines/bridge-015000/disk.qcow2.raw /Users/jenkins/minikube-integration/19531-965/.minikube/machines/bridge-015000/disk.qcow2
	I0829 12:06:01.706034    5739 main.go:141] libmachine: STDOUT: 
	I0829 12:06:01.706051    5739 main.go:141] libmachine: STDERR: 
	I0829 12:06:01.706095    5739 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19531-965/.minikube/machines/bridge-015000/disk.qcow2 +20000M
	I0829 12:06:01.714029    5739 main.go:141] libmachine: STDOUT: Image resized.
	
	I0829 12:06:01.714046    5739 main.go:141] libmachine: STDERR: 
	I0829 12:06:01.714057    5739 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19531-965/.minikube/machines/bridge-015000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19531-965/.minikube/machines/bridge-015000/disk.qcow2
	I0829 12:06:01.714062    5739 main.go:141] libmachine: Starting QEMU VM...
	I0829 12:06:01.714068    5739 qemu.go:418] Using hvf for hardware acceleration
	I0829 12:06:01.714093    5739 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19531-965/.minikube/machines/bridge-015000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19531-965/.minikube/machines/bridge-015000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19531-965/.minikube/machines/bridge-015000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:ab:4a:ac:4d:dd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19531-965/.minikube/machines/bridge-015000/disk.qcow2
	I0829 12:06:01.715743    5739 main.go:141] libmachine: STDOUT: 
	I0829 12:06:01.715765    5739 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0829 12:06:01.715778    5739 client.go:171] duration metric: took 316.054834ms to LocalClient.Create
	I0829 12:06:03.717761    5739 start.go:128] duration metric: took 2.380357583s to createHost
	I0829 12:06:03.717850    5739 start.go:83] releasing machines lock for "bridge-015000", held for 2.380914542s
	W0829 12:06:03.718245    5739 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p bridge-015000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p bridge-015000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0829 12:06:03.727906    5739 out.go:201] 
	W0829 12:06:03.738015    5739 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0829 12:06:03.738038    5739 out.go:270] * 
	* 
	W0829 12:06:03.740524    5739 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0829 12:06:03.750888    5739 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (9.89s)
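
Every start in this group dies the same way: the qemu2 driver launches QEMU through socket_vmnet_client, the client cannot reach the daemon socket at /var/run/socket_vmnet, and host creation aborts with exit status 80 before the VM ever boots. A minimal host-side triage sketch, assuming the Homebrew-managed socket_vmnet install implied by the paths in the log (the brew service name below is an assumption and may differ between installs):

    # Does the unix socket exist on the host?
    ls -l /var/run/socket_vmnet

    # Is the daemon process running at all?
    pgrep -fl socket_vmnet

    # If not, restart it ("socket_vmnet" as a brew service name is an
    # assumption; adjust for launchd- or manually-managed installs).
    sudo brew services restart socket_vmnet

If the socket is absent or nothing is listening on it, every qemu2-driver test that follows will fail with the same "Connection refused".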

                                                
                                    
TestNetworkPlugins/group/kubenet/Start (9.9s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-015000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-015000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (9.894629916s)

                                                
                                                
-- stdout --
	* [kubenet-015000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19531
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19531-965/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19531-965/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubenet-015000" primary control-plane node in "kubenet-015000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-015000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0829 12:06:05.961289    5849 out.go:345] Setting OutFile to fd 1 ...
	I0829 12:06:05.961413    5849 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 12:06:05.961420    5849 out.go:358] Setting ErrFile to fd 2...
	I0829 12:06:05.961422    5849 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 12:06:05.961557    5849 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19531-965/.minikube/bin
	I0829 12:06:05.962621    5849 out.go:352] Setting JSON to false
	I0829 12:06:05.978677    5849 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3929,"bootTime":1724954436,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0829 12:06:05.978747    5849 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0829 12:06:05.986121    5849 out.go:177] * [kubenet-015000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0829 12:06:05.994954    5849 out.go:177]   - MINIKUBE_LOCATION=19531
	I0829 12:06:05.994995    5849 notify.go:220] Checking for updates...
	I0829 12:06:06.001859    5849 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19531-965/kubeconfig
	I0829 12:06:06.004892    5849 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0829 12:06:06.007776    5849 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0829 12:06:06.010842    5849 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19531-965/.minikube
	I0829 12:06:06.013873    5849 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0829 12:06:06.015884    5849 config.go:182] Loaded profile config "cert-expiration-916000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0829 12:06:06.015955    5849 config.go:182] Loaded profile config "multinode-531000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0829 12:06:06.016006    5849 driver.go:392] Setting default libvirt URI to qemu:///system
	I0829 12:06:06.020908    5849 out.go:177] * Using the qemu2 driver based on user configuration
	I0829 12:06:06.027758    5849 start.go:297] selected driver: qemu2
	I0829 12:06:06.027764    5849 start.go:901] validating driver "qemu2" against <nil>
	I0829 12:06:06.027771    5849 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0829 12:06:06.029977    5849 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0829 12:06:06.032839    5849 out.go:177] * Automatically selected the socket_vmnet network
	I0829 12:06:06.037075    5849 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0829 12:06:06.037094    5849 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0829 12:06:06.037134    5849 start.go:340] cluster config:
	{Name:kubenet-015000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:kubenet-015000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 12:06:06.040969    5849 iso.go:125] acquiring lock: {Name:mke1867919ff797f263eb38fb348ce00bba9d753 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 12:06:06.049868    5849 out.go:177] * Starting "kubenet-015000" primary control-plane node in "kubenet-015000" cluster
	I0829 12:06:06.053851    5849 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0829 12:06:06.053872    5849 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19531-965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0829 12:06:06.053885    5849 cache.go:56] Caching tarball of preloaded images
	I0829 12:06:06.053947    5849 preload.go:172] Found /Users/jenkins/minikube-integration/19531-965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0829 12:06:06.053953    5849 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0829 12:06:06.054015    5849 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/kubenet-015000/config.json ...
	I0829 12:06:06.054027    5849 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/kubenet-015000/config.json: {Name:mkfd2604a4b52a53d39b2a6f9796f717edab8f8f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 12:06:06.054269    5849 start.go:360] acquireMachinesLock for kubenet-015000: {Name:mk54c878341df5948b162b01843176e59d4f0973 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0829 12:06:06.054306    5849 start.go:364] duration metric: took 30.625µs to acquireMachinesLock for "kubenet-015000"
	I0829 12:06:06.054318    5849 start.go:93] Provisioning new machine with config: &{Name:kubenet-015000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:kubenet-015000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0829 12:06:06.054354    5849 start.go:125] createHost starting for "" (driver="qemu2")
	I0829 12:06:06.063825    5849 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0829 12:06:06.082814    5849 start.go:159] libmachine.API.Create for "kubenet-015000" (driver="qemu2")
	I0829 12:06:06.082843    5849 client.go:168] LocalClient.Create starting
	I0829 12:06:06.082911    5849 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19531-965/.minikube/certs/ca.pem
	I0829 12:06:06.082943    5849 main.go:141] libmachine: Decoding PEM data...
	I0829 12:06:06.082961    5849 main.go:141] libmachine: Parsing certificate...
	I0829 12:06:06.083001    5849 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19531-965/.minikube/certs/cert.pem
	I0829 12:06:06.083025    5849 main.go:141] libmachine: Decoding PEM data...
	I0829 12:06:06.083034    5849 main.go:141] libmachine: Parsing certificate...
	I0829 12:06:06.083403    5849 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19531-965/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19531-965/.minikube/cache/iso/arm64/minikube-v1.33.1-1724775098-19521-arm64.iso...
	I0829 12:06:06.244193    5849 main.go:141] libmachine: Creating SSH key...
	I0829 12:06:06.393331    5849 main.go:141] libmachine: Creating Disk image...
	I0829 12:06:06.393338    5849 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0829 12:06:06.393525    5849 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19531-965/.minikube/machines/kubenet-015000/disk.qcow2.raw /Users/jenkins/minikube-integration/19531-965/.minikube/machines/kubenet-015000/disk.qcow2
	I0829 12:06:06.403305    5849 main.go:141] libmachine: STDOUT: 
	I0829 12:06:06.403326    5849 main.go:141] libmachine: STDERR: 
	I0829 12:06:06.403394    5849 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19531-965/.minikube/machines/kubenet-015000/disk.qcow2 +20000M
	I0829 12:06:06.411427    5849 main.go:141] libmachine: STDOUT: Image resized.
	
	I0829 12:06:06.411445    5849 main.go:141] libmachine: STDERR: 
	I0829 12:06:06.411461    5849 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19531-965/.minikube/machines/kubenet-015000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19531-965/.minikube/machines/kubenet-015000/disk.qcow2
	I0829 12:06:06.411466    5849 main.go:141] libmachine: Starting QEMU VM...
	I0829 12:06:06.411479    5849 qemu.go:418] Using hvf for hardware acceleration
	I0829 12:06:06.411506    5849 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19531-965/.minikube/machines/kubenet-015000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19531-965/.minikube/machines/kubenet-015000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19531-965/.minikube/machines/kubenet-015000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:a4:10:0d:0c:f0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19531-965/.minikube/machines/kubenet-015000/disk.qcow2
	I0829 12:06:06.413178    5849 main.go:141] libmachine: STDOUT: 
	I0829 12:06:06.413195    5849 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0829 12:06:06.413216    5849 client.go:171] duration metric: took 330.391375ms to LocalClient.Create
	I0829 12:06:08.415240    5849 start.go:128] duration metric: took 2.361048333s to createHost
	I0829 12:06:08.415293    5849 start.go:83] releasing machines lock for "kubenet-015000", held for 2.361161708s
	W0829 12:06:08.415356    5849 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0829 12:06:08.425684    5849 out.go:177] * Deleting "kubenet-015000" in qemu2 ...
	W0829 12:06:08.465436    5849 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0829 12:06:08.465461    5849 start.go:729] Will try again in 5 seconds ...
	I0829 12:06:13.467422    5849 start.go:360] acquireMachinesLock for kubenet-015000: {Name:mk54c878341df5948b162b01843176e59d4f0973 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0829 12:06:13.467868    5849 start.go:364] duration metric: took 341.791µs to acquireMachinesLock for "kubenet-015000"
	I0829 12:06:13.468008    5849 start.go:93] Provisioning new machine with config: &{Name:kubenet-015000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:kubenet-015000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0829 12:06:13.468314    5849 start.go:125] createHost starting for "" (driver="qemu2")
	I0829 12:06:13.475853    5849 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0829 12:06:13.527849    5849 start.go:159] libmachine.API.Create for "kubenet-015000" (driver="qemu2")
	I0829 12:06:13.527898    5849 client.go:168] LocalClient.Create starting
	I0829 12:06:13.528000    5849 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19531-965/.minikube/certs/ca.pem
	I0829 12:06:13.528059    5849 main.go:141] libmachine: Decoding PEM data...
	I0829 12:06:13.528077    5849 main.go:141] libmachine: Parsing certificate...
	I0829 12:06:13.528135    5849 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19531-965/.minikube/certs/cert.pem
	I0829 12:06:13.528187    5849 main.go:141] libmachine: Decoding PEM data...
	I0829 12:06:13.528197    5849 main.go:141] libmachine: Parsing certificate...
	I0829 12:06:13.528877    5849 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19531-965/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19531-965/.minikube/cache/iso/arm64/minikube-v1.33.1-1724775098-19521-arm64.iso...
	I0829 12:06:13.700224    5849 main.go:141] libmachine: Creating SSH key...
	I0829 12:06:13.754814    5849 main.go:141] libmachine: Creating Disk image...
	I0829 12:06:13.754820    5849 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0829 12:06:13.754987    5849 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19531-965/.minikube/machines/kubenet-015000/disk.qcow2.raw /Users/jenkins/minikube-integration/19531-965/.minikube/machines/kubenet-015000/disk.qcow2
	I0829 12:06:13.764245    5849 main.go:141] libmachine: STDOUT: 
	I0829 12:06:13.764264    5849 main.go:141] libmachine: STDERR: 
	I0829 12:06:13.764322    5849 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19531-965/.minikube/machines/kubenet-015000/disk.qcow2 +20000M
	I0829 12:06:13.772467    5849 main.go:141] libmachine: STDOUT: Image resized.
	
	I0829 12:06:13.772487    5849 main.go:141] libmachine: STDERR: 
	I0829 12:06:13.772502    5849 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19531-965/.minikube/machines/kubenet-015000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19531-965/.minikube/machines/kubenet-015000/disk.qcow2
	I0829 12:06:13.772506    5849 main.go:141] libmachine: Starting QEMU VM...
	I0829 12:06:13.772513    5849 qemu.go:418] Using hvf for hardware acceleration
	I0829 12:06:13.772544    5849 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19531-965/.minikube/machines/kubenet-015000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19531-965/.minikube/machines/kubenet-015000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19531-965/.minikube/machines/kubenet-015000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:bc:55:52:0e:96 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19531-965/.minikube/machines/kubenet-015000/disk.qcow2
	I0829 12:06:13.774203    5849 main.go:141] libmachine: STDOUT: 
	I0829 12:06:13.774218    5849 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0829 12:06:13.774230    5849 client.go:171] duration metric: took 246.340125ms to LocalClient.Create
	I0829 12:06:15.776324    5849 start.go:128] duration metric: took 2.308107708s to createHost
	I0829 12:06:15.776385    5849 start.go:83] releasing machines lock for "kubenet-015000", held for 2.308624542s
	W0829 12:06:15.776901    5849 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-015000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubenet-015000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0829 12:06:15.791676    5849 out.go:201] 
	W0829 12:06:15.795761    5849 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0829 12:06:15.795822    5849 out.go:270] * 
	* 
	W0829 12:06:15.798657    5849 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0829 12:06:15.813604    5849 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (9.90s)
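
The kubenet run fails identically to bridge, which points at the shared socket_vmnet daemon rather than at any particular network plugin. One way to confirm the failure is independent of minikube is to invoke the client binary from the log directly; a sketch, assuming socket_vmnet_client's usual behavior of connecting to the socket and then exec'ing the trailing command ('true' here is only a placeholder payload):

    # Reproduces the exact "Connection refused" when the daemon is down,
    # without involving the minikube binary or any per-test VM config.
    /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true
    echo "client exit status: $?"

A non-zero exit here, with the same error text as above, rules out the test harness and narrows the problem to the host's socket_vmnet service.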

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (10.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-225000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-225000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (9.94231175s)

                                                
                                                
-- stdout --
	* [old-k8s-version-225000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19531
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19531-965/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19531-965/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "old-k8s-version-225000" primary control-plane node in "old-k8s-version-225000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-225000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0829 12:06:18.004358    5961 out.go:345] Setting OutFile to fd 1 ...
	I0829 12:06:18.004483    5961 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 12:06:18.004485    5961 out.go:358] Setting ErrFile to fd 2...
	I0829 12:06:18.004487    5961 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 12:06:18.004615    5961 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19531-965/.minikube/bin
	I0829 12:06:18.005711    5961 out.go:352] Setting JSON to false
	I0829 12:06:18.021627    5961 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3942,"bootTime":1724954436,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0829 12:06:18.021705    5961 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0829 12:06:18.027653    5961 out.go:177] * [old-k8s-version-225000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0829 12:06:18.035533    5961 out.go:177]   - MINIKUBE_LOCATION=19531
	I0829 12:06:18.035576    5961 notify.go:220] Checking for updates...
	I0829 12:06:18.042521    5961 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19531-965/kubeconfig
	I0829 12:06:18.045545    5961 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0829 12:06:18.048498    5961 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0829 12:06:18.051501    5961 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19531-965/.minikube
	I0829 12:06:18.054537    5961 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0829 12:06:18.057926    5961 config.go:182] Loaded profile config "cert-expiration-916000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0829 12:06:18.057994    5961 config.go:182] Loaded profile config "multinode-531000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0829 12:06:18.058046    5961 driver.go:392] Setting default libvirt URI to qemu:///system
	I0829 12:06:18.061464    5961 out.go:177] * Using the qemu2 driver based on user configuration
	I0829 12:06:18.068445    5961 start.go:297] selected driver: qemu2
	I0829 12:06:18.068451    5961 start.go:901] validating driver "qemu2" against <nil>
	I0829 12:06:18.068457    5961 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0829 12:06:18.070677    5961 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0829 12:06:18.074402    5961 out.go:177] * Automatically selected the socket_vmnet network
	I0829 12:06:18.077578    5961 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0829 12:06:18.077618    5961 cni.go:84] Creating CNI manager for ""
	I0829 12:06:18.077630    5961 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0829 12:06:18.077667    5961 start.go:340] cluster config:
	{Name:old-k8s-version-225000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-225000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 12:06:18.081379    5961 iso.go:125] acquiring lock: {Name:mke1867919ff797f263eb38fb348ce00bba9d753 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 12:06:18.090547    5961 out.go:177] * Starting "old-k8s-version-225000" primary control-plane node in "old-k8s-version-225000" cluster
	I0829 12:06:18.093514    5961 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0829 12:06:18.093532    5961 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19531-965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0829 12:06:18.093546    5961 cache.go:56] Caching tarball of preloaded images
	I0829 12:06:18.093626    5961 preload.go:172] Found /Users/jenkins/minikube-integration/19531-965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0829 12:06:18.093632    5961 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0829 12:06:18.093693    5961 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/old-k8s-version-225000/config.json ...
	I0829 12:06:18.093705    5961 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/old-k8s-version-225000/config.json: {Name:mk3a7128efcd09617036343f950e80fcdd5b0f90 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 12:06:18.093944    5961 start.go:360] acquireMachinesLock for old-k8s-version-225000: {Name:mk54c878341df5948b162b01843176e59d4f0973 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0829 12:06:18.093982    5961 start.go:364] duration metric: took 29.666µs to acquireMachinesLock for "old-k8s-version-225000"
	I0829 12:06:18.093993    5961 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-225000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-225000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0829 12:06:18.094027    5961 start.go:125] createHost starting for "" (driver="qemu2")
	I0829 12:06:18.102418    5961 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0829 12:06:18.121298    5961 start.go:159] libmachine.API.Create for "old-k8s-version-225000" (driver="qemu2")
	I0829 12:06:18.121331    5961 client.go:168] LocalClient.Create starting
	I0829 12:06:18.121405    5961 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19531-965/.minikube/certs/ca.pem
	I0829 12:06:18.121440    5961 main.go:141] libmachine: Decoding PEM data...
	I0829 12:06:18.121449    5961 main.go:141] libmachine: Parsing certificate...
	I0829 12:06:18.121486    5961 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19531-965/.minikube/certs/cert.pem
	I0829 12:06:18.121510    5961 main.go:141] libmachine: Decoding PEM data...
	I0829 12:06:18.121518    5961 main.go:141] libmachine: Parsing certificate...
	I0829 12:06:18.121907    5961 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19531-965/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19531-965/.minikube/cache/iso/arm64/minikube-v1.33.1-1724775098-19521-arm64.iso...
	I0829 12:06:18.282450    5961 main.go:141] libmachine: Creating SSH key...
	I0829 12:06:18.421961    5961 main.go:141] libmachine: Creating Disk image...
	I0829 12:06:18.421971    5961 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0829 12:06:18.422471    5961 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19531-965/.minikube/machines/old-k8s-version-225000/disk.qcow2.raw /Users/jenkins/minikube-integration/19531-965/.minikube/machines/old-k8s-version-225000/disk.qcow2
	I0829 12:06:18.432038    5961 main.go:141] libmachine: STDOUT: 
	I0829 12:06:18.432057    5961 main.go:141] libmachine: STDERR: 
	I0829 12:06:18.432108    5961 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19531-965/.minikube/machines/old-k8s-version-225000/disk.qcow2 +20000M
	I0829 12:06:18.440030    5961 main.go:141] libmachine: STDOUT: Image resized.
	
	I0829 12:06:18.440047    5961 main.go:141] libmachine: STDERR: 
	I0829 12:06:18.440061    5961 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19531-965/.minikube/machines/old-k8s-version-225000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19531-965/.minikube/machines/old-k8s-version-225000/disk.qcow2
	I0829 12:06:18.440064    5961 main.go:141] libmachine: Starting QEMU VM...
	I0829 12:06:18.440076    5961 qemu.go:418] Using hvf for hardware acceleration
	I0829 12:06:18.440104    5961 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19531-965/.minikube/machines/old-k8s-version-225000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19531-965/.minikube/machines/old-k8s-version-225000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19531-965/.minikube/machines/old-k8s-version-225000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:a1:ac:a4:d4:6c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19531-965/.minikube/machines/old-k8s-version-225000/disk.qcow2
	I0829 12:06:18.441694    5961 main.go:141] libmachine: STDOUT: 
	I0829 12:06:18.441711    5961 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0829 12:06:18.441732    5961 client.go:171] duration metric: took 320.409584ms to LocalClient.Create
	I0829 12:06:20.443816    5961 start.go:128] duration metric: took 2.349882833s to createHost
	I0829 12:06:20.443880    5961 start.go:83] releasing machines lock for "old-k8s-version-225000", held for 2.350002916s
	W0829 12:06:20.443935    5961 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0829 12:06:20.455121    5961 out.go:177] * Deleting "old-k8s-version-225000" in qemu2 ...
	W0829 12:06:20.494971    5961 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0829 12:06:20.494993    5961 start.go:729] Will try again in 5 seconds ...
	I0829 12:06:25.497000    5961 start.go:360] acquireMachinesLock for old-k8s-version-225000: {Name:mk54c878341df5948b162b01843176e59d4f0973 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0829 12:06:25.497461    5961 start.go:364] duration metric: took 353.333µs to acquireMachinesLock for "old-k8s-version-225000"
	I0829 12:06:25.497605    5961 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-225000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-225000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0829 12:06:25.497979    5961 start.go:125] createHost starting for "" (driver="qemu2")
	I0829 12:06:25.517767    5961 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0829 12:06:25.570457    5961 start.go:159] libmachine.API.Create for "old-k8s-version-225000" (driver="qemu2")
	I0829 12:06:25.570509    5961 client.go:168] LocalClient.Create starting
	I0829 12:06:25.570630    5961 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19531-965/.minikube/certs/ca.pem
	I0829 12:06:25.570699    5961 main.go:141] libmachine: Decoding PEM data...
	I0829 12:06:25.570717    5961 main.go:141] libmachine: Parsing certificate...
	I0829 12:06:25.570782    5961 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19531-965/.minikube/certs/cert.pem
	I0829 12:06:25.570826    5961 main.go:141] libmachine: Decoding PEM data...
	I0829 12:06:25.570839    5961 main.go:141] libmachine: Parsing certificate...
	I0829 12:06:25.571402    5961 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19531-965/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19531-965/.minikube/cache/iso/arm64/minikube-v1.33.1-1724775098-19521-arm64.iso...
	I0829 12:06:25.743635    5961 main.go:141] libmachine: Creating SSH key...
	I0829 12:06:25.844087    5961 main.go:141] libmachine: Creating Disk image...
	I0829 12:06:25.844092    5961 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0829 12:06:25.844266    5961 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19531-965/.minikube/machines/old-k8s-version-225000/disk.qcow2.raw /Users/jenkins/minikube-integration/19531-965/.minikube/machines/old-k8s-version-225000/disk.qcow2
	I0829 12:06:25.853834    5961 main.go:141] libmachine: STDOUT: 
	I0829 12:06:25.853854    5961 main.go:141] libmachine: STDERR: 
	I0829 12:06:25.853922    5961 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19531-965/.minikube/machines/old-k8s-version-225000/disk.qcow2 +20000M
	I0829 12:06:25.861836    5961 main.go:141] libmachine: STDOUT: Image resized.
	
	I0829 12:06:25.861859    5961 main.go:141] libmachine: STDERR: 
	I0829 12:06:25.861871    5961 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19531-965/.minikube/machines/old-k8s-version-225000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19531-965/.minikube/machines/old-k8s-version-225000/disk.qcow2
	I0829 12:06:25.861875    5961 main.go:141] libmachine: Starting QEMU VM...
	I0829 12:06:25.861885    5961 qemu.go:418] Using hvf for hardware acceleration
	I0829 12:06:25.861918    5961 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19531-965/.minikube/machines/old-k8s-version-225000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19531-965/.minikube/machines/old-k8s-version-225000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19531-965/.minikube/machines/old-k8s-version-225000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:de:27:c1:0d:61 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19531-965/.minikube/machines/old-k8s-version-225000/disk.qcow2
	I0829 12:06:25.863525    5961 main.go:141] libmachine: STDOUT: 
	I0829 12:06:25.863543    5961 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0829 12:06:25.863563    5961 client.go:171] duration metric: took 293.051083ms to LocalClient.Create
	I0829 12:06:27.865660    5961 start.go:128] duration metric: took 2.367750333s to createHost
	I0829 12:06:27.865742    5961 start.go:83] releasing machines lock for "old-k8s-version-225000", held for 2.368350375s
	W0829 12:06:27.866145    5961 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-225000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-225000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0829 12:06:27.882658    5961 out.go:201] 
	W0829 12:06:27.886998    5961 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0829 12:06:27.887064    5961 out.go:270] * 
	* 
	W0829 12:06:27.889890    5961 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0829 12:06:27.904743    5961 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-225000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-225000 -n old-k8s-version-225000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-225000 -n old-k8s-version-225000: exit status 7 (68.576416ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-225000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (10.01s)
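Note on the root cause: every start in this test group dies at the same host-side step. socket_vmnet_client cannot reach the socket_vmnet daemon's unix socket at /var/run/socket_vmnet, so qemu-system-aarch64 is never launched. A minimal sketch of how one might verify the daemon on the build host is below; the manual launch line follows socket_vmnet's documented usage and the gateway address is an assumption, neither is taken from this report.

	# hypothetical host-side check, not part of the captured test output
	$ ls -l /var/run/socket_vmnet      # the unix socket should exist
	$ pgrep -fl socket_vmnet           # the daemon should be running
	# if the daemon is down, start it by hand (vmnet requires root):
	$ sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet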

TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-225000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-225000 create -f testdata/busybox.yaml: exit status 1 (29.122167ms)

** stderr ** 
	error: context "old-k8s-version-225000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-225000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-225000 -n old-k8s-version-225000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-225000 -n old-k8s-version-225000: exit status 7 (30.187792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-225000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-225000 -n old-k8s-version-225000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-225000 -n old-k8s-version-225000: exit status 7 (29.382125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-225000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)
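Note: the first start failed before a kubeconfig entry was ever written, so every kubectl call against the profile's context fails with "context does not exist". A quick sketch of how to confirm that from the same shell, using stock kubectl commands (nothing here is specific to this run):

	# hypothetical check, not part of the captured test output
	$ kubectl config get-contexts      # old-k8s-version-225000 will be absent
	$ kubectl config current-context   # errors if no context is selected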

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-225000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-225000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-225000 describe deploy/metrics-server -n kube-system: exit status 1 (26.4605ms)

** stderr ** 
	error: context "old-k8s-version-225000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-225000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-225000 -n old-k8s-version-225000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-225000 -n old-k8s-version-225000: exit status 7 (29.571459ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-225000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)
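Note: the assertion at start_stop_delete_test.go:221 amounts to reading the metrics-server deployment's container image and checking it was rewritten to the fake.domain registry. A sketch of the equivalent manual check, assuming a working context (which this run never had):

	# hypothetical check, not part of the captured test output
	$ kubectl --context old-k8s-version-225000 -n kube-system \
	    get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'
	# expected output: fake.domain/registry.k8s.io/echoserver:1.4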

TestStartStop/group/old-k8s-version/serial/SecondStart (5.26s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-225000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-225000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (5.191152083s)

-- stdout --
	* [old-k8s-version-225000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19531
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19531-965/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19531-965/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	* Using the qemu2 driver based on existing profile
	* Starting "old-k8s-version-225000" primary control-plane node in "old-k8s-version-225000" cluster
	* Restarting existing qemu2 VM for "old-k8s-version-225000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-225000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0829 12:06:31.781690    6009 out.go:345] Setting OutFile to fd 1 ...
	I0829 12:06:31.781823    6009 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 12:06:31.781827    6009 out.go:358] Setting ErrFile to fd 2...
	I0829 12:06:31.781829    6009 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 12:06:31.781947    6009 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19531-965/.minikube/bin
	I0829 12:06:31.782998    6009 out.go:352] Setting JSON to false
	I0829 12:06:31.799421    6009 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3955,"bootTime":1724954436,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0829 12:06:31.799485    6009 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0829 12:06:31.804190    6009 out.go:177] * [old-k8s-version-225000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0829 12:06:31.811130    6009 out.go:177]   - MINIKUBE_LOCATION=19531
	I0829 12:06:31.811169    6009 notify.go:220] Checking for updates...
	I0829 12:06:31.818165    6009 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19531-965/kubeconfig
	I0829 12:06:31.821139    6009 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0829 12:06:31.824155    6009 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0829 12:06:31.827140    6009 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19531-965/.minikube
	I0829 12:06:31.830079    6009 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0829 12:06:31.833478    6009 config.go:182] Loaded profile config "old-k8s-version-225000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0829 12:06:31.837129    6009 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0829 12:06:31.840133    6009 driver.go:392] Setting default libvirt URI to qemu:///system
	I0829 12:06:31.844117    6009 out.go:177] * Using the qemu2 driver based on existing profile
	I0829 12:06:31.850184    6009 start.go:297] selected driver: qemu2
	I0829 12:06:31.850191    6009 start.go:901] validating driver "qemu2" against &{Name:old-k8s-version-225000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-225000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 12:06:31.850260    6009 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0829 12:06:31.852806    6009 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0829 12:06:31.852847    6009 cni.go:84] Creating CNI manager for ""
	I0829 12:06:31.852855    6009 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0829 12:06:31.852877    6009 start.go:340] cluster config:
	{Name:old-k8s-version-225000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-225000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 12:06:31.856709    6009 iso.go:125] acquiring lock: {Name:mke1867919ff797f263eb38fb348ce00bba9d753 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 12:06:31.865135    6009 out.go:177] * Starting "old-k8s-version-225000" primary control-plane node in "old-k8s-version-225000" cluster
	I0829 12:06:31.869146    6009 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0829 12:06:31.869163    6009 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19531-965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0829 12:06:31.869179    6009 cache.go:56] Caching tarball of preloaded images
	I0829 12:06:31.869248    6009 preload.go:172] Found /Users/jenkins/minikube-integration/19531-965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0829 12:06:31.869254    6009 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0829 12:06:31.869305    6009 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/old-k8s-version-225000/config.json ...
	I0829 12:06:31.869848    6009 start.go:360] acquireMachinesLock for old-k8s-version-225000: {Name:mk54c878341df5948b162b01843176e59d4f0973 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0829 12:06:31.869879    6009 start.go:364] duration metric: took 23.75µs to acquireMachinesLock for "old-k8s-version-225000"
	I0829 12:06:31.869887    6009 start.go:96] Skipping create...Using existing machine configuration
	I0829 12:06:31.869897    6009 fix.go:54] fixHost starting: 
	I0829 12:06:31.870014    6009 fix.go:112] recreateIfNeeded on old-k8s-version-225000: state=Stopped err=<nil>
	W0829 12:06:31.870023    6009 fix.go:138] unexpected machine state, will restart: <nil>
	I0829 12:06:31.873137    6009 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-225000" ...
	I0829 12:06:31.881196    6009 qemu.go:418] Using hvf for hardware acceleration
	I0829 12:06:31.881242    6009 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19531-965/.minikube/machines/old-k8s-version-225000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19531-965/.minikube/machines/old-k8s-version-225000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19531-965/.minikube/machines/old-k8s-version-225000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:de:27:c1:0d:61 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19531-965/.minikube/machines/old-k8s-version-225000/disk.qcow2
	I0829 12:06:31.883302    6009 main.go:141] libmachine: STDOUT: 
	I0829 12:06:31.883330    6009 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0829 12:06:31.883359    6009 fix.go:56] duration metric: took 13.465584ms for fixHost
	I0829 12:06:31.883364    6009 start.go:83] releasing machines lock for "old-k8s-version-225000", held for 13.481625ms
	W0829 12:06:31.883370    6009 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0829 12:06:31.883401    6009 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0829 12:06:31.883405    6009 start.go:729] Will try again in 5 seconds ...
	I0829 12:06:36.885443    6009 start.go:360] acquireMachinesLock for old-k8s-version-225000: {Name:mk54c878341df5948b162b01843176e59d4f0973 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0829 12:06:36.885999    6009 start.go:364] duration metric: took 420.875µs to acquireMachinesLock for "old-k8s-version-225000"
	I0829 12:06:36.886236    6009 start.go:96] Skipping create...Using existing machine configuration
	I0829 12:06:36.886256    6009 fix.go:54] fixHost starting: 
	I0829 12:06:36.887128    6009 fix.go:112] recreateIfNeeded on old-k8s-version-225000: state=Stopped err=<nil>
	W0829 12:06:36.887156    6009 fix.go:138] unexpected machine state, will restart: <nil>
	I0829 12:06:36.895560    6009 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-225000" ...
	I0829 12:06:36.898543    6009 qemu.go:418] Using hvf for hardware acceleration
	I0829 12:06:36.898773    6009 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19531-965/.minikube/machines/old-k8s-version-225000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19531-965/.minikube/machines/old-k8s-version-225000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19531-965/.minikube/machines/old-k8s-version-225000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:de:27:c1:0d:61 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19531-965/.minikube/machines/old-k8s-version-225000/disk.qcow2
	I0829 12:06:36.907700    6009 main.go:141] libmachine: STDOUT: 
	I0829 12:06:36.907781    6009 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0829 12:06:36.907862    6009 fix.go:56] duration metric: took 21.606792ms for fixHost
	I0829 12:06:36.907890    6009 start.go:83] releasing machines lock for "old-k8s-version-225000", held for 21.77975ms
	W0829 12:06:36.908113    6009 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-225000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-225000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0829 12:06:36.916620    6009 out.go:201] 
	W0829 12:06:36.920641    6009 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0829 12:06:36.920665    6009 out.go:270] * 
	* 
	W0829 12:06:36.923115    6009 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0829 12:06:36.930488    6009 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-225000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-225000 -n old-k8s-version-225000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-225000 -n old-k8s-version-225000: exit status 7 (71.223208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-225000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (5.26s)
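Note: the second start takes the restart path ("Skipping create...Using existing machine configuration") and replays the same socket_vmnet_client/QEMU command, so it fails identically. The recovery the error text suggests is sketched below, built only from commands already shown in this log; neither step will help until the socket_vmnet daemon is reachable again (see the note under FirstStart):

	$ out/minikube-darwin-arm64 delete -p old-k8s-version-225000
	$ out/minikube-darwin-arm64 start -p old-k8s-version-225000 --driver=qemu2 --kubernetes-version=v1.20.0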

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-225000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-225000 -n old-k8s-version-225000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-225000 -n old-k8s-version-225000: exit status 7 (33.077584ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-225000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-225000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-225000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-225000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (28.289959ms)

** stderr ** 
	error: context "old-k8s-version-225000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-225000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-225000 -n old-k8s-version-225000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-225000 -n old-k8s-version-225000: exit status 7 (29.657375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-225000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p old-k8s-version-225000 image list --format=json
start_stop_delete_test.go:304: v1.20.0 images missing (-want +got):
[]string{
- 	"k8s.gcr.io/coredns:1.7.0",
- 	"k8s.gcr.io/etcd:3.4.13-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.20.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.20.0",
- 	"k8s.gcr.io/kube-proxy:v1.20.0",
- 	"k8s.gcr.io/kube-scheduler:v1.20.0",
- 	"k8s.gcr.io/pause:3.2",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-225000 -n old-k8s-version-225000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-225000 -n old-k8s-version-225000: exit status 7 (30.210291ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-225000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)
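Note: the -want list in the diff above is the complete v1.20.0 control-plane image set from k8s.gcr.io; +got is empty because the VM never booted, so nothing was pulled. On a healthy profile the same data comes from the command the test runs; the table format is an assumption about the minikube CLI, not something shown in this report:

	$ out/minikube-darwin-arm64 -p old-k8s-version-225000 image list --format=json
	$ out/minikube-darwin-arm64 -p old-k8s-version-225000 image list --format=table   # assumed human-readable variant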

TestStartStop/group/old-k8s-version/serial/Pause (0.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-225000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-225000 --alsologtostderr -v=1: exit status 83 (41.057625ms)

-- stdout --
	* The control-plane node old-k8s-version-225000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p old-k8s-version-225000"

-- /stdout --
** stderr ** 
	I0829 12:06:37.208192    6028 out.go:345] Setting OutFile to fd 1 ...
	I0829 12:06:37.208573    6028 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 12:06:37.208577    6028 out.go:358] Setting ErrFile to fd 2...
	I0829 12:06:37.208580    6028 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 12:06:37.208736    6028 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19531-965/.minikube/bin
	I0829 12:06:37.208936    6028 out.go:352] Setting JSON to false
	I0829 12:06:37.208944    6028 mustload.go:65] Loading cluster: old-k8s-version-225000
	I0829 12:06:37.209135    6028 config.go:182] Loaded profile config "old-k8s-version-225000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0829 12:06:37.213731    6028 out.go:177] * The control-plane node old-k8s-version-225000 host is not running: state=Stopped
	I0829 12:06:37.216809    6028 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-225000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-225000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-225000 -n old-k8s-version-225000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-225000 -n old-k8s-version-225000: exit status 7 (30.288041ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-225000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-225000 -n old-k8s-version-225000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-225000 -n old-k8s-version-225000: exit status 7 (29.1905ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-225000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.10s)

TestStartStop/group/no-preload/serial/FirstStart (10.03s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-622000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-622000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0: exit status 80 (9.964915s)

-- stdout --
	* [no-preload-622000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19531
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19531-965/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19531-965/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "no-preload-622000" primary control-plane node in "no-preload-622000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-622000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0829 12:06:37.525979    6045 out.go:345] Setting OutFile to fd 1 ...
	I0829 12:06:37.526106    6045 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 12:06:37.526109    6045 out.go:358] Setting ErrFile to fd 2...
	I0829 12:06:37.526111    6045 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 12:06:37.526240    6045 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19531-965/.minikube/bin
	I0829 12:06:37.527283    6045 out.go:352] Setting JSON to false
	I0829 12:06:37.543577    6045 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3961,"bootTime":1724954436,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0829 12:06:37.543643    6045 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0829 12:06:37.548850    6045 out.go:177] * [no-preload-622000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0829 12:06:37.554716    6045 out.go:177]   - MINIKUBE_LOCATION=19531
	I0829 12:06:37.554763    6045 notify.go:220] Checking for updates...
	I0829 12:06:37.562701    6045 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19531-965/kubeconfig
	I0829 12:06:37.565721    6045 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0829 12:06:37.568759    6045 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0829 12:06:37.571654    6045 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19531-965/.minikube
	I0829 12:06:37.574760    6045 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0829 12:06:37.578093    6045 config.go:182] Loaded profile config "cert-expiration-916000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0829 12:06:37.578155    6045 config.go:182] Loaded profile config "multinode-531000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0829 12:06:37.578200    6045 driver.go:392] Setting default libvirt URI to qemu:///system
	I0829 12:06:37.581688    6045 out.go:177] * Using the qemu2 driver based on user configuration
	I0829 12:06:37.588789    6045 start.go:297] selected driver: qemu2
	I0829 12:06:37.588796    6045 start.go:901] validating driver "qemu2" against <nil>
	I0829 12:06:37.588809    6045 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0829 12:06:37.591196    6045 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0829 12:06:37.593704    6045 out.go:177] * Automatically selected the socket_vmnet network
	I0829 12:06:37.597781    6045 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0829 12:06:37.597804    6045 cni.go:84] Creating CNI manager for ""
	I0829 12:06:37.597811    6045 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0829 12:06:37.597815    6045 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0829 12:06:37.597842    6045 start.go:340] cluster config:
	{Name:no-preload-622000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-622000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 12:06:37.601460    6045 iso.go:125] acquiring lock: {Name:mke1867919ff797f263eb38fb348ce00bba9d753 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 12:06:37.609675    6045 out.go:177] * Starting "no-preload-622000" primary control-plane node in "no-preload-622000" cluster
	I0829 12:06:37.613729    6045 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0829 12:06:37.613806    6045 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/no-preload-622000/config.json ...
	I0829 12:06:37.613822    6045 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/no-preload-622000/config.json: {Name:mka3cce95a609066854841ba38240da4624d73f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 12:06:37.613817    6045 cache.go:107] acquiring lock: {Name:mk6b07a564e4863994ca9d7d373f831b08786989 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 12:06:37.613823    6045 cache.go:107] acquiring lock: {Name:mk43611890887523ca89f123aa3a4398077d7dd3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 12:06:37.613828    6045 cache.go:107] acquiring lock: {Name:mk4717646786c3c098c8fff794ea9ba6b7b76be3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 12:06:37.613845    6045 cache.go:107] acquiring lock: {Name:mkd1a8bbd08ffec89ecf39c254bbf48917389ca6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 12:06:37.613977    6045 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0829 12:06:37.613991    6045 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I0829 12:06:37.613988    6045 cache.go:107] acquiring lock: {Name:mkf99fa534400845dd620d78d1618039b5f8173e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 12:06:37.613989    6045 cache.go:107] acquiring lock: {Name:mkbc05a502353c451ed23060a553d1a1ab4791e8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 12:06:37.613974    6045 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.0
	I0829 12:06:37.614101    6045 cache.go:107] acquiring lock: {Name:mk9d1c19883e9cc15ef5a0067e0406fbe3972946 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 12:06:37.614143    6045 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 12:06:37.614126    6045 cache.go:107] acquiring lock: {Name:mk383777349a948a5b04672f641e073908c246ac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 12:06:37.614270    6045 start.go:360] acquireMachinesLock for no-preload-622000: {Name:mk54c878341df5948b162b01843176e59d4f0973 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0829 12:06:37.614288    6045 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I0829 12:06:37.614312    6045 start.go:364] duration metric: took 36.25µs to acquireMachinesLock for "no-preload-622000"
	I0829 12:06:37.614346    6045 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.0
	I0829 12:06:37.614349    6045 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.0
	I0829 12:06:37.614362    6045 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0829 12:06:37.614323    6045 start.go:93] Provisioning new machine with config: &{Name:no-preload-622000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-622000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0829 12:06:37.614371    6045 start.go:125] createHost starting for "" (driver="qemu2")
	I0829 12:06:37.617798    6045 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0829 12:06:37.628282    6045 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.0
	I0829 12:06:37.628310    6045 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I0829 12:06:37.628377    6045 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.0
	I0829 12:06:37.628414    6045 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0829 12:06:37.628532    6045 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0829 12:06:37.628623    6045 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.0
	I0829 12:06:37.628707    6045 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.0
	I0829 12:06:37.628982    6045 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I0829 12:06:37.636295    6045 start.go:159] libmachine.API.Create for "no-preload-622000" (driver="qemu2")
	I0829 12:06:37.636319    6045 client.go:168] LocalClient.Create starting
	I0829 12:06:37.636386    6045 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19531-965/.minikube/certs/ca.pem
	I0829 12:06:37.636416    6045 main.go:141] libmachine: Decoding PEM data...
	I0829 12:06:37.636428    6045 main.go:141] libmachine: Parsing certificate...
	I0829 12:06:37.636484    6045 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19531-965/.minikube/certs/cert.pem
	I0829 12:06:37.636507    6045 main.go:141] libmachine: Decoding PEM data...
	I0829 12:06:37.636522    6045 main.go:141] libmachine: Parsing certificate...
	I0829 12:06:37.636907    6045 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19531-965/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19531-965/.minikube/cache/iso/arm64/minikube-v1.33.1-1724775098-19521-arm64.iso...
	I0829 12:06:37.800059    6045 main.go:141] libmachine: Creating SSH key...
	I0829 12:06:37.909273    6045 main.go:141] libmachine: Creating Disk image...
	I0829 12:06:37.909367    6045 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0829 12:06:37.909606    6045 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19531-965/.minikube/machines/no-preload-622000/disk.qcow2.raw /Users/jenkins/minikube-integration/19531-965/.minikube/machines/no-preload-622000/disk.qcow2
	I0829 12:06:37.919594    6045 main.go:141] libmachine: STDOUT: 
	I0829 12:06:37.919618    6045 main.go:141] libmachine: STDERR: 
	I0829 12:06:37.919678    6045 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19531-965/.minikube/machines/no-preload-622000/disk.qcow2 +20000M
	I0829 12:06:37.929126    6045 main.go:141] libmachine: STDOUT: Image resized.
	
	I0829 12:06:37.929144    6045 main.go:141] libmachine: STDERR: 
	I0829 12:06:37.929158    6045 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19531-965/.minikube/machines/no-preload-622000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19531-965/.minikube/machines/no-preload-622000/disk.qcow2
	I0829 12:06:37.929162    6045 main.go:141] libmachine: Starting QEMU VM...
	I0829 12:06:37.929175    6045 qemu.go:418] Using hvf for hardware acceleration
	I0829 12:06:37.929202    6045 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19531-965/.minikube/machines/no-preload-622000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19531-965/.minikube/machines/no-preload-622000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19531-965/.minikube/machines/no-preload-622000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:d7:8e:94:4a:41 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19531-965/.minikube/machines/no-preload-622000/disk.qcow2
	I0829 12:06:37.931253    6045 main.go:141] libmachine: STDOUT: 
	I0829 12:06:37.931270    6045 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0829 12:06:37.931289    6045 client.go:171] duration metric: took 294.974834ms to LocalClient.Create
	I0829 12:06:38.626722    6045 cache.go:162] opening:  /Users/jenkins/minikube-integration/19531-965/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1
	I0829 12:06:38.661361    6045 cache.go:162] opening:  /Users/jenkins/minikube-integration/19531-965/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0
	I0829 12:06:38.681126    6045 cache.go:162] opening:  /Users/jenkins/minikube-integration/19531-965/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0
	I0829 12:06:38.718198    6045 cache.go:162] opening:  /Users/jenkins/minikube-integration/19531-965/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0
	I0829 12:06:38.832613    6045 cache.go:162] opening:  /Users/jenkins/minikube-integration/19531-965/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0
	I0829 12:06:38.850379    6045 cache.go:162] opening:  /Users/jenkins/minikube-integration/19531-965/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10
	I0829 12:06:38.903680    6045 cache.go:162] opening:  /Users/jenkins/minikube-integration/19531-965/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0
	I0829 12:06:38.962559    6045 cache.go:157] /Users/jenkins/minikube-integration/19531-965/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I0829 12:06:38.962639    6045 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19531-965/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 1.348826917s
	I0829 12:06:38.962689    6045 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19531-965/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	W0829 12:06:38.992523    6045 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0829 12:06:38.992618    6045 cache.go:162] opening:  /Users/jenkins/minikube-integration/19531-965/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0829 12:06:39.931451    6045 start.go:128] duration metric: took 2.31712675s to createHost
	I0829 12:06:39.931504    6045 start.go:83] releasing machines lock for "no-preload-622000", held for 2.317255833s
	W0829 12:06:39.931574    6045 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0829 12:06:39.949438    6045 out.go:177] * Deleting "no-preload-622000" in qemu2 ...
	I0829 12:06:39.969659    6045 cache.go:157] /Users/jenkins/minikube-integration/19531-965/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0829 12:06:39.969701    6045 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19531-965/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 2.355949791s
	I0829 12:06:39.969719    6045 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19531-965/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	W0829 12:06:39.975948    6045 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0829 12:06:39.975975    6045 start.go:729] Will try again in 5 seconds ...
	I0829 12:06:42.089218    6045 cache.go:157] /Users/jenkins/minikube-integration/19531-965/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0829 12:06:42.089278    6045 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/19531-965/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 4.4753505s
	I0829 12:06:42.089305    6045 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/19531-965/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0829 12:06:42.292421    6045 cache.go:157] /Users/jenkins/minikube-integration/19531-965/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0 exists
	I0829 12:06:42.292482    6045 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.0" -> "/Users/jenkins/minikube-integration/19531-965/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0" took 4.67865275s
	I0829 12:06:42.292533    6045 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.0 -> /Users/jenkins/minikube-integration/19531-965/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0 succeeded
	I0829 12:06:42.397219    6045 cache.go:157] /Users/jenkins/minikube-integration/19531-965/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0 exists
	I0829 12:06:42.397265    6045 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.0" -> "/Users/jenkins/minikube-integration/19531-965/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0" took 4.783306041s
	I0829 12:06:42.397297    6045 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.0 -> /Users/jenkins/minikube-integration/19531-965/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0 succeeded
	I0829 12:06:42.453021    6045 cache.go:157] /Users/jenkins/minikube-integration/19531-965/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0 exists
	I0829 12:06:42.453056    6045 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.0" -> "/Users/jenkins/minikube-integration/19531-965/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0" took 4.839386s
	I0829 12:06:42.453084    6045 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.0 -> /Users/jenkins/minikube-integration/19531-965/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0 succeeded
	I0829 12:06:42.820179    6045 cache.go:157] /Users/jenkins/minikube-integration/19531-965/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0 exists
	I0829 12:06:42.820226    6045 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.0" -> "/Users/jenkins/minikube-integration/19531-965/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0" took 5.20655675s
	I0829 12:06:42.820253    6045 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.0 -> /Users/jenkins/minikube-integration/19531-965/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0 succeeded
	I0829 12:06:44.977640    6045 start.go:360] acquireMachinesLock for no-preload-622000: {Name:mk54c878341df5948b162b01843176e59d4f0973 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0829 12:06:44.978146    6045 start.go:364] duration metric: took 421.292µs to acquireMachinesLock for "no-preload-622000"
	I0829 12:06:44.978291    6045 start.go:93] Provisioning new machine with config: &{Name:no-preload-622000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-622000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0829 12:06:44.978545    6045 start.go:125] createHost starting for "" (driver="qemu2")
	I0829 12:06:44.989209    6045 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0829 12:06:45.041926    6045 start.go:159] libmachine.API.Create for "no-preload-622000" (driver="qemu2")
	I0829 12:06:45.041964    6045 client.go:168] LocalClient.Create starting
	I0829 12:06:45.042062    6045 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19531-965/.minikube/certs/ca.pem
	I0829 12:06:45.042120    6045 main.go:141] libmachine: Decoding PEM data...
	I0829 12:06:45.042140    6045 main.go:141] libmachine: Parsing certificate...
	I0829 12:06:45.042207    6045 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19531-965/.minikube/certs/cert.pem
	I0829 12:06:45.042245    6045 main.go:141] libmachine: Decoding PEM data...
	I0829 12:06:45.042257    6045 main.go:141] libmachine: Parsing certificate...
	I0829 12:06:45.042702    6045 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19531-965/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19531-965/.minikube/cache/iso/arm64/minikube-v1.33.1-1724775098-19521-arm64.iso...
	I0829 12:06:45.261033    6045 main.go:141] libmachine: Creating SSH key...
	I0829 12:06:45.390784    6045 main.go:141] libmachine: Creating Disk image...
	I0829 12:06:45.390789    6045 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0829 12:06:45.390970    6045 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19531-965/.minikube/machines/no-preload-622000/disk.qcow2.raw /Users/jenkins/minikube-integration/19531-965/.minikube/machines/no-preload-622000/disk.qcow2
	I0829 12:06:45.400075    6045 cache.go:157] /Users/jenkins/minikube-integration/19531-965/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 exists
	I0829 12:06:45.400102    6045 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/Users/jenkins/minikube-integration/19531-965/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0" took 7.78635025s
	I0829 12:06:45.400110    6045 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /Users/jenkins/minikube-integration/19531-965/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 succeeded
	I0829 12:06:45.400126    6045 cache.go:87] Successfully saved all images to host disk.
	I0829 12:06:45.402223    6045 main.go:141] libmachine: STDOUT: 
	I0829 12:06:45.402233    6045 main.go:141] libmachine: STDERR: 
	I0829 12:06:45.402284    6045 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19531-965/.minikube/machines/no-preload-622000/disk.qcow2 +20000M
	I0829 12:06:45.410620    6045 main.go:141] libmachine: STDOUT: Image resized.
	
	I0829 12:06:45.410637    6045 main.go:141] libmachine: STDERR: 
	I0829 12:06:45.410647    6045 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19531-965/.minikube/machines/no-preload-622000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19531-965/.minikube/machines/no-preload-622000/disk.qcow2
	I0829 12:06:45.410654    6045 main.go:141] libmachine: Starting QEMU VM...
	I0829 12:06:45.410665    6045 qemu.go:418] Using hvf for hardware acceleration
	I0829 12:06:45.410698    6045 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19531-965/.minikube/machines/no-preload-622000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19531-965/.minikube/machines/no-preload-622000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19531-965/.minikube/machines/no-preload-622000/qemu.pid -device virtio-net-pci,netdev=net0,mac=aa:0c:62:c7:d9:31 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19531-965/.minikube/machines/no-preload-622000/disk.qcow2
	I0829 12:06:45.412393    6045 main.go:141] libmachine: STDOUT: 
	I0829 12:06:45.412415    6045 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0829 12:06:45.412429    6045 client.go:171] duration metric: took 370.470583ms to LocalClient.Create
	I0829 12:06:47.414608    6045 start.go:128] duration metric: took 2.436062625s to createHost
	I0829 12:06:47.414693    6045 start.go:83] releasing machines lock for "no-preload-622000", held for 2.436584125s
	W0829 12:06:47.415187    6045 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-622000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-622000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0829 12:06:47.433141    6045 out.go:201] 
	W0829 12:06:47.437427    6045 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0829 12:06:47.437478    6045 out.go:270] * 
	* 
	W0829 12:06:47.438725    6045 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0829 12:06:47.450923    6045 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-622000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-622000 -n no-preload-622000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-622000 -n no-preload-622000: exit status 7 (57.534791ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-622000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (10.03s)
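Both create attempts above fail at the same step: libmachine hands the QEMU command line to /opt/socket_vmnet/bin/socket_vmnet_client, which must first connect to the /var/run/socket_vmnet unix socket (QEMU then inherits that connection as -netdev socket,fd=3), and that connect is refused. A minimal triage sketch for the build host follows, assuming the daemon was installed per the upstream lima-vm/socket_vmnet instructions; the daemon binary path, launchd label, and --vmnet-gateway value are assumptions, not shown in this log:

	# Does the socket file exist, and who owns it?
	ls -l /var/run/socket_vmnet
	# Is a socket_vmnet daemon loaded under launchd? (label/name assumed)
	sudo launchctl list | grep -i socket_vmnet
	# If nothing is listening, run the daemon in the foreground to surface its error output
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet

A "Connection refused" against an existing socket file usually means the daemon exited and left a stale socket behind; every qemu2-driver test in this run trips over the same refusal.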

TestStartStop/group/no-preload/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-622000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-622000 create -f testdata/busybox.yaml: exit status 1 (29.690375ms)

** stderr ** 
	error: context "no-preload-622000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-622000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-622000 -n no-preload-622000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-622000 -n no-preload-622000: exit status 7 (29.787625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-622000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-622000 -n no-preload-622000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-622000 -n no-preload-622000: exit status 7 (29.232875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-622000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.09s)
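The kubectl errors in this group are all downstream of the FirstStart failure: the VM never booted, so minikube never wrote a no-preload-622000 context into the kubeconfig used by the run. A quick check against that kubeconfig (path as printed in the SecondStart output below) would be:

	KUBECONFIG=/Users/jenkins/minikube-integration/19531-965/kubeconfig kubectl config get-contexts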

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.12s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-622000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-622000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-622000 describe deploy/metrics-server -n kube-system: exit status 1 (27.516666ms)

** stderr ** 
	error: context "no-preload-622000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-622000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-622000 -n no-preload-622000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-622000 -n no-preload-622000: exit status 7 (29.648334ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-622000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.12s)

TestStartStop/group/no-preload/serial/SecondStart (5.25s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-622000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-622000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0: exit status 80 (5.18157125s)

-- stdout --
	* [no-preload-622000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19531
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19531-965/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19531-965/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "no-preload-622000" primary control-plane node in "no-preload-622000" cluster
	* Restarting existing qemu2 VM for "no-preload-622000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-622000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0829 12:06:50.814349    6128 out.go:345] Setting OutFile to fd 1 ...
	I0829 12:06:50.814471    6128 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 12:06:50.814474    6128 out.go:358] Setting ErrFile to fd 2...
	I0829 12:06:50.814476    6128 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 12:06:50.814611    6128 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19531-965/.minikube/bin
	I0829 12:06:50.815613    6128 out.go:352] Setting JSON to false
	I0829 12:06:50.832145    6128 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3974,"bootTime":1724954436,"procs":466,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0829 12:06:50.832219    6128 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0829 12:06:50.834114    6128 out.go:177] * [no-preload-622000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0829 12:06:50.841401    6128 out.go:177]   - MINIKUBE_LOCATION=19531
	I0829 12:06:50.841447    6128 notify.go:220] Checking for updates...
	I0829 12:06:50.849284    6128 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19531-965/kubeconfig
	I0829 12:06:50.852437    6128 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0829 12:06:50.855441    6128 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0829 12:06:50.858383    6128 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19531-965/.minikube
	I0829 12:06:50.861438    6128 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0829 12:06:50.864558    6128 config.go:182] Loaded profile config "no-preload-622000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0829 12:06:50.864830    6128 driver.go:392] Setting default libvirt URI to qemu:///system
	I0829 12:06:50.869451    6128 out.go:177] * Using the qemu2 driver based on existing profile
	I0829 12:06:50.876374    6128 start.go:297] selected driver: qemu2
	I0829 12:06:50.876380    6128 start.go:901] validating driver "qemu2" against &{Name:no-preload-622000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-622000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 12:06:50.876437    6128 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0829 12:06:50.878671    6128 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0829 12:06:50.878716    6128 cni.go:84] Creating CNI manager for ""
	I0829 12:06:50.878723    6128 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0829 12:06:50.878744    6128 start.go:340] cluster config:
	{Name:no-preload-622000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-622000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 12:06:50.882331    6128 iso.go:125] acquiring lock: {Name:mke1867919ff797f263eb38fb348ce00bba9d753 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 12:06:50.890414    6128 out.go:177] * Starting "no-preload-622000" primary control-plane node in "no-preload-622000" cluster
	I0829 12:06:50.894415    6128 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0829 12:06:50.894473    6128 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/no-preload-622000/config.json ...
	I0829 12:06:50.894495    6128 cache.go:107] acquiring lock: {Name:mk43611890887523ca89f123aa3a4398077d7dd3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 12:06:50.894493    6128 cache.go:107] acquiring lock: {Name:mk4717646786c3c098c8fff794ea9ba6b7b76be3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 12:06:50.894516    6128 cache.go:107] acquiring lock: {Name:mk6b07a564e4863994ca9d7d373f831b08786989 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 12:06:50.894550    6128 cache.go:115] /Users/jenkins/minikube-integration/19531-965/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0829 12:06:50.894554    6128 cache.go:115] /Users/jenkins/minikube-integration/19531-965/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0 exists
	I0829 12:06:50.894558    6128 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19531-965/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 61.583µs
	I0829 12:06:50.894559    6128 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.0" -> "/Users/jenkins/minikube-integration/19531-965/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0" took 72.125µs
	I0829 12:06:50.894563    6128 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.0 -> /Users/jenkins/minikube-integration/19531-965/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0 succeeded
	I0829 12:06:50.894563    6128 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19531-965/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0829 12:06:50.894568    6128 cache.go:107] acquiring lock: {Name:mkbc05a502353c451ed23060a553d1a1ab4791e8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 12:06:50.894571    6128 cache.go:115] /Users/jenkins/minikube-integration/19531-965/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0 exists
	I0829 12:06:50.894578    6128 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.0" -> "/Users/jenkins/minikube-integration/19531-965/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0" took 76.166µs
	I0829 12:06:50.894576    6128 cache.go:107] acquiring lock: {Name:mkf99fa534400845dd620d78d1618039b5f8173e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 12:06:50.894582    6128 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.0 -> /Users/jenkins/minikube-integration/19531-965/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0 succeeded
	I0829 12:06:50.894588    6128 cache.go:107] acquiring lock: {Name:mk383777349a948a5b04672f641e073908c246ac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 12:06:50.894602    6128 cache.go:115] /Users/jenkins/minikube-integration/19531-965/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0 exists
	I0829 12:06:50.894606    6128 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.0" -> "/Users/jenkins/minikube-integration/19531-965/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0" took 38.75µs
	I0829 12:06:50.894611    6128 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.0 -> /Users/jenkins/minikube-integration/19531-965/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0 succeeded
	I0829 12:06:50.894612    6128 cache.go:115] /Users/jenkins/minikube-integration/19531-965/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 exists
	I0829 12:06:50.894617    6128 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/Users/jenkins/minikube-integration/19531-965/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0" took 41.375µs
	I0829 12:06:50.894621    6128 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /Users/jenkins/minikube-integration/19531-965/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 succeeded
	I0829 12:06:50.894624    6128 cache.go:115] /Users/jenkins/minikube-integration/19531-965/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0829 12:06:50.894628    6128 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/19531-965/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 40.583µs
	I0829 12:06:50.894631    6128 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/19531-965/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0829 12:06:50.894666    6128 cache.go:107] acquiring lock: {Name:mkd1a8bbd08ffec89ecf39c254bbf48917389ca6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 12:06:50.894673    6128 cache.go:107] acquiring lock: {Name:mk9d1c19883e9cc15ef5a0067e0406fbe3972946 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 12:06:50.894716    6128 cache.go:115] /Users/jenkins/minikube-integration/19531-965/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I0829 12:06:50.894720    6128 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19531-965/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 75.75µs
	I0829 12:06:50.894727    6128 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19531-965/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I0829 12:06:50.894730    6128 cache.go:115] /Users/jenkins/minikube-integration/19531-965/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0 exists
	I0829 12:06:50.894736    6128 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.0" -> "/Users/jenkins/minikube-integration/19531-965/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0" took 80.292µs
	I0829 12:06:50.894740    6128 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.0 -> /Users/jenkins/minikube-integration/19531-965/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0 succeeded
	I0829 12:06:50.894744    6128 cache.go:87] Successfully saved all images to host disk.
	I0829 12:06:50.894857    6128 start.go:360] acquireMachinesLock for no-preload-622000: {Name:mk54c878341df5948b162b01843176e59d4f0973 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0829 12:06:50.894887    6128 start.go:364] duration metric: took 24.708µs to acquireMachinesLock for "no-preload-622000"
	I0829 12:06:50.894895    6128 start.go:96] Skipping create...Using existing machine configuration
	I0829 12:06:50.894899    6128 fix.go:54] fixHost starting: 
	I0829 12:06:50.895013    6128 fix.go:112] recreateIfNeeded on no-preload-622000: state=Stopped err=<nil>
	W0829 12:06:50.895020    6128 fix.go:138] unexpected machine state, will restart: <nil>
	I0829 12:06:50.903386    6128 out.go:177] * Restarting existing qemu2 VM for "no-preload-622000" ...
	I0829 12:06:50.907431    6128 qemu.go:418] Using hvf for hardware acceleration
	I0829 12:06:50.907466    6128 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19531-965/.minikube/machines/no-preload-622000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19531-965/.minikube/machines/no-preload-622000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19531-965/.minikube/machines/no-preload-622000/qemu.pid -device virtio-net-pci,netdev=net0,mac=aa:0c:62:c7:d9:31 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19531-965/.minikube/machines/no-preload-622000/disk.qcow2
	I0829 12:06:50.909351    6128 main.go:141] libmachine: STDOUT: 
	I0829 12:06:50.909371    6128 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0829 12:06:50.909395    6128 fix.go:56] duration metric: took 14.496667ms for fixHost
	I0829 12:06:50.909398    6128 start.go:83] releasing machines lock for "no-preload-622000", held for 14.507625ms
	W0829 12:06:50.909406    6128 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0829 12:06:50.909433    6128 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0829 12:06:50.909437    6128 start.go:729] Will try again in 5 seconds ...
	I0829 12:06:55.911537    6128 start.go:360] acquireMachinesLock for no-preload-622000: {Name:mk54c878341df5948b162b01843176e59d4f0973 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0829 12:06:55.912090    6128 start.go:364] duration metric: took 425.125µs to acquireMachinesLock for "no-preload-622000"
	I0829 12:06:55.912264    6128 start.go:96] Skipping create...Using existing machine configuration
	I0829 12:06:55.912284    6128 fix.go:54] fixHost starting: 
	I0829 12:06:55.913092    6128 fix.go:112] recreateIfNeeded on no-preload-622000: state=Stopped err=<nil>
	W0829 12:06:55.913119    6128 fix.go:138] unexpected machine state, will restart: <nil>
	I0829 12:06:55.917934    6128 out.go:177] * Restarting existing qemu2 VM for "no-preload-622000" ...
	I0829 12:06:55.922500    6128 qemu.go:418] Using hvf for hardware acceleration
	I0829 12:06:55.922761    6128 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19531-965/.minikube/machines/no-preload-622000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19531-965/.minikube/machines/no-preload-622000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19531-965/.minikube/machines/no-preload-622000/qemu.pid -device virtio-net-pci,netdev=net0,mac=aa:0c:62:c7:d9:31 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19531-965/.minikube/machines/no-preload-622000/disk.qcow2
	I0829 12:06:55.932764    6128 main.go:141] libmachine: STDOUT: 
	I0829 12:06:55.932837    6128 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0829 12:06:55.932929    6128 fix.go:56] duration metric: took 20.645125ms for fixHost
	I0829 12:06:55.932948    6128 start.go:83] releasing machines lock for "no-preload-622000", held for 20.807791ms
	W0829 12:06:55.933156    6128 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-622000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-622000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0829 12:06:55.940653    6128 out.go:201] 
	W0829 12:06:55.942385    6128 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0829 12:06:55.942436    6128 out.go:270] * 
	* 
	W0829 12:06:55.945275    6128 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0829 12:06:55.954684    6128 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-622000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-622000 -n no-preload-622000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-622000 -n no-preload-622000: exit status 7 (68.210333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-622000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (5.25s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-622000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-622000 -n no-preload-622000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-622000 -n no-preload-622000: exit status 7 (32.306208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-622000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-622000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-622000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-622000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.82575ms)

** stderr ** 
	error: context "no-preload-622000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-622000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-622000 -n no-preload-622000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-622000 -n no-preload-622000: exit status 7 (30.386583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-622000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p no-preload-622000 image list --format=json
start_stop_delete_test.go:304: v1.31.0 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.0",
- 	"registry.k8s.io/kube-controller-manager:v1.31.0",
- 	"registry.k8s.io/kube-proxy:v1.31.0",
- 	"registry.k8s.io/kube-scheduler:v1.31.0",
- 	"registry.k8s.io/pause:3.10",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-622000 -n no-preload-622000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-622000 -n no-preload-622000: exit status 7 (29.895875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-622000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)
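The "-want +got" diff above lists every expected v1.31.0 image as missing because `image list` ran against a profile whose VM never booted. On a healthy cluster the check can be reproduced by hand with something like the following sketch (assumes jq is installed and that the JSON output carries a repoTags field per image; both are assumptions, not confirmed by this log):

	# dump the image tags the profile actually has, for comparison with the want list
	out/minikube-darwin-arm64 -p no-preload-622000 image list --format=json \
	  | jq -r '.[].repoTags[]' | sort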

TestStartStop/group/no-preload/serial/Pause (0.1s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-622000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-622000 --alsologtostderr -v=1: exit status 83 (41.429ms)

-- stdout --
	* The control-plane node no-preload-622000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p no-preload-622000"

-- /stdout --
** stderr ** 
	I0829 12:06:56.226596    6149 out.go:345] Setting OutFile to fd 1 ...
	I0829 12:06:56.226743    6149 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 12:06:56.226746    6149 out.go:358] Setting ErrFile to fd 2...
	I0829 12:06:56.226748    6149 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 12:06:56.226854    6149 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19531-965/.minikube/bin
	I0829 12:06:56.227057    6149 out.go:352] Setting JSON to false
	I0829 12:06:56.227065    6149 mustload.go:65] Loading cluster: no-preload-622000
	I0829 12:06:56.227258    6149 config.go:182] Loaded profile config "no-preload-622000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0829 12:06:56.231391    6149 out.go:177] * The control-plane node no-preload-622000 host is not running: state=Stopped
	I0829 12:06:56.234385    6149 out.go:177]   To start a cluster, run: "minikube start -p no-preload-622000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-622000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-622000 -n no-preload-622000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-622000 -n no-preload-622000: exit status 7 (29.38025ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-622000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-622000 -n no-preload-622000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-622000 -n no-preload-622000: exit status 7 (30.130375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-622000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.10s)
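Exit status 83 here is minikube declining to pause a profile whose host is stopped, and the command output itself names the remedy. Assuming the underlying socket_vmnet problem is fixed first, the recovery sequence would be (commands taken from the log above):

	# start the stopped profile, as the CLI suggests
	out/minikube-darwin-arm64 start -p no-preload-622000
	# then retry the pause that failed above
	out/minikube-darwin-arm64 pause -p no-preload-622000 --alsologtostderr -v=1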

TestStartStop/group/embed-certs/serial/FirstStart (9.94s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-142000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-142000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.0: exit status 80 (9.870092375s)

-- stdout --
	* [embed-certs-142000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19531
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19531-965/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19531-965/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "embed-certs-142000" primary control-plane node in "embed-certs-142000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-142000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0829 12:06:56.544762    6166 out.go:345] Setting OutFile to fd 1 ...
	I0829 12:06:56.544910    6166 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 12:06:56.544913    6166 out.go:358] Setting ErrFile to fd 2...
	I0829 12:06:56.544915    6166 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 12:06:56.545048    6166 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19531-965/.minikube/bin
	I0829 12:06:56.546132    6166 out.go:352] Setting JSON to false
	I0829 12:06:56.562363    6166 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3980,"bootTime":1724954436,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0829 12:06:56.562436    6166 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0829 12:06:56.565388    6166 out.go:177] * [embed-certs-142000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0829 12:06:56.572371    6166 out.go:177]   - MINIKUBE_LOCATION=19531
	I0829 12:06:56.572392    6166 notify.go:220] Checking for updates...
	I0829 12:06:56.579339    6166 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19531-965/kubeconfig
	I0829 12:06:56.580936    6166 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0829 12:06:56.584358    6166 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0829 12:06:56.587323    6166 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19531-965/.minikube
	I0829 12:06:56.590397    6166 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0829 12:06:56.593689    6166 config.go:182] Loaded profile config "cert-expiration-916000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0829 12:06:56.593756    6166 config.go:182] Loaded profile config "multinode-531000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0829 12:06:56.593805    6166 driver.go:392] Setting default libvirt URI to qemu:///system
	I0829 12:06:56.598376    6166 out.go:177] * Using the qemu2 driver based on user configuration
	I0829 12:06:56.605375    6166 start.go:297] selected driver: qemu2
	I0829 12:06:56.605382    6166 start.go:901] validating driver "qemu2" against <nil>
	I0829 12:06:56.605390    6166 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0829 12:06:56.607837    6166 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0829 12:06:56.610326    6166 out.go:177] * Automatically selected the socket_vmnet network
	I0829 12:06:56.613406    6166 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0829 12:06:56.613428    6166 cni.go:84] Creating CNI manager for ""
	I0829 12:06:56.613435    6166 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0829 12:06:56.613441    6166 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0829 12:06:56.613465    6166 start.go:340] cluster config:
	{Name:embed-certs-142000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-142000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 12:06:56.617341    6166 iso.go:125] acquiring lock: {Name:mke1867919ff797f263eb38fb348ce00bba9d753 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 12:06:56.626372    6166 out.go:177] * Starting "embed-certs-142000" primary control-plane node in "embed-certs-142000" cluster
	I0829 12:06:56.630321    6166 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0829 12:06:56.630335    6166 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19531-965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0829 12:06:56.630346    6166 cache.go:56] Caching tarball of preloaded images
	I0829 12:06:56.630406    6166 preload.go:172] Found /Users/jenkins/minikube-integration/19531-965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0829 12:06:56.630412    6166 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0829 12:06:56.630469    6166 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/embed-certs-142000/config.json ...
	I0829 12:06:56.630480    6166 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/embed-certs-142000/config.json: {Name:mk7e0d679e4c7d5c05fbde1243adaed3f2eef4e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 12:06:56.630827    6166 start.go:360] acquireMachinesLock for embed-certs-142000: {Name:mk54c878341df5948b162b01843176e59d4f0973 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0829 12:06:56.630865    6166 start.go:364] duration metric: took 31.167µs to acquireMachinesLock for "embed-certs-142000"
	I0829 12:06:56.630877    6166 start.go:93] Provisioning new machine with config: &{Name:embed-certs-142000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-142000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0829 12:06:56.630907    6166 start.go:125] createHost starting for "" (driver="qemu2")
	I0829 12:06:56.639332    6166 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0829 12:06:56.657843    6166 start.go:159] libmachine.API.Create for "embed-certs-142000" (driver="qemu2")
	I0829 12:06:56.657876    6166 client.go:168] LocalClient.Create starting
	I0829 12:06:56.657947    6166 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19531-965/.minikube/certs/ca.pem
	I0829 12:06:56.657977    6166 main.go:141] libmachine: Decoding PEM data...
	I0829 12:06:56.657987    6166 main.go:141] libmachine: Parsing certificate...
	I0829 12:06:56.658029    6166 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19531-965/.minikube/certs/cert.pem
	I0829 12:06:56.658054    6166 main.go:141] libmachine: Decoding PEM data...
	I0829 12:06:56.658062    6166 main.go:141] libmachine: Parsing certificate...
	I0829 12:06:56.658445    6166 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19531-965/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19531-965/.minikube/cache/iso/arm64/minikube-v1.33.1-1724775098-19521-arm64.iso...
	I0829 12:06:56.818543    6166 main.go:141] libmachine: Creating SSH key...
	I0829 12:06:56.902458    6166 main.go:141] libmachine: Creating Disk image...
	I0829 12:06:56.902463    6166 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0829 12:06:56.902640    6166 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19531-965/.minikube/machines/embed-certs-142000/disk.qcow2.raw /Users/jenkins/minikube-integration/19531-965/.minikube/machines/embed-certs-142000/disk.qcow2
	I0829 12:06:56.911931    6166 main.go:141] libmachine: STDOUT: 
	I0829 12:06:56.911950    6166 main.go:141] libmachine: STDERR: 
	I0829 12:06:56.912005    6166 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19531-965/.minikube/machines/embed-certs-142000/disk.qcow2 +20000M
	I0829 12:06:56.920062    6166 main.go:141] libmachine: STDOUT: Image resized.
	
	I0829 12:06:56.920075    6166 main.go:141] libmachine: STDERR: 
	I0829 12:06:56.920094    6166 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19531-965/.minikube/machines/embed-certs-142000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19531-965/.minikube/machines/embed-certs-142000/disk.qcow2
	I0829 12:06:56.920098    6166 main.go:141] libmachine: Starting QEMU VM...
	I0829 12:06:56.920114    6166 qemu.go:418] Using hvf for hardware acceleration
	I0829 12:06:56.920141    6166 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19531-965/.minikube/machines/embed-certs-142000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19531-965/.minikube/machines/embed-certs-142000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19531-965/.minikube/machines/embed-certs-142000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:45:76:c0:7a:ff -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19531-965/.minikube/machines/embed-certs-142000/disk.qcow2
	I0829 12:06:56.921777    6166 main.go:141] libmachine: STDOUT: 
	I0829 12:06:56.921791    6166 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0829 12:06:56.921806    6166 client.go:171] duration metric: took 263.931625ms to LocalClient.Create
	I0829 12:06:58.923950    6166 start.go:128] duration metric: took 2.293072209s to createHost
	I0829 12:06:58.924030    6166 start.go:83] releasing machines lock for "embed-certs-142000", held for 2.29321425s
	W0829 12:06:58.924075    6166 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0829 12:06:58.935189    6166 out.go:177] * Deleting "embed-certs-142000" in qemu2 ...
	W0829 12:06:58.975401    6166 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0829 12:06:58.975439    6166 start.go:729] Will try again in 5 seconds ...
	I0829 12:07:03.975980    6166 start.go:360] acquireMachinesLock for embed-certs-142000: {Name:mk54c878341df5948b162b01843176e59d4f0973 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0829 12:07:03.976400    6166 start.go:364] duration metric: took 331.416µs to acquireMachinesLock for "embed-certs-142000"
	I0829 12:07:03.976523    6166 start.go:93] Provisioning new machine with config: &{Name:embed-certs-142000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-142000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0829 12:07:03.976807    6166 start.go:125] createHost starting for "" (driver="qemu2")
	I0829 12:07:03.988171    6166 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0829 12:07:04.038481    6166 start.go:159] libmachine.API.Create for "embed-certs-142000" (driver="qemu2")
	I0829 12:07:04.038529    6166 client.go:168] LocalClient.Create starting
	I0829 12:07:04.038650    6166 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19531-965/.minikube/certs/ca.pem
	I0829 12:07:04.038714    6166 main.go:141] libmachine: Decoding PEM data...
	I0829 12:07:04.038732    6166 main.go:141] libmachine: Parsing certificate...
	I0829 12:07:04.038809    6166 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19531-965/.minikube/certs/cert.pem
	I0829 12:07:04.038853    6166 main.go:141] libmachine: Decoding PEM data...
	I0829 12:07:04.038864    6166 main.go:141] libmachine: Parsing certificate...
	I0829 12:07:04.039523    6166 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19531-965/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19531-965/.minikube/cache/iso/arm64/minikube-v1.33.1-1724775098-19521-arm64.iso...
	I0829 12:07:04.209499    6166 main.go:141] libmachine: Creating SSH key...
	I0829 12:07:04.317731    6166 main.go:141] libmachine: Creating Disk image...
	I0829 12:07:04.317737    6166 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0829 12:07:04.317928    6166 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19531-965/.minikube/machines/embed-certs-142000/disk.qcow2.raw /Users/jenkins/minikube-integration/19531-965/.minikube/machines/embed-certs-142000/disk.qcow2
	I0829 12:07:04.327312    6166 main.go:141] libmachine: STDOUT: 
	I0829 12:07:04.327339    6166 main.go:141] libmachine: STDERR: 
	I0829 12:07:04.327388    6166 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19531-965/.minikube/machines/embed-certs-142000/disk.qcow2 +20000M
	I0829 12:07:04.335379    6166 main.go:141] libmachine: STDOUT: Image resized.
	
	I0829 12:07:04.335401    6166 main.go:141] libmachine: STDERR: 
	I0829 12:07:04.335417    6166 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19531-965/.minikube/machines/embed-certs-142000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19531-965/.minikube/machines/embed-certs-142000/disk.qcow2
	I0829 12:07:04.335423    6166 main.go:141] libmachine: Starting QEMU VM...
	I0829 12:07:04.335439    6166 qemu.go:418] Using hvf for hardware acceleration
	I0829 12:07:04.335475    6166 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19531-965/.minikube/machines/embed-certs-142000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19531-965/.minikube/machines/embed-certs-142000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19531-965/.minikube/machines/embed-certs-142000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:41:7a:34:64:fb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19531-965/.minikube/machines/embed-certs-142000/disk.qcow2
	I0829 12:07:04.337136    6166 main.go:141] libmachine: STDOUT: 
	I0829 12:07:04.337159    6166 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0829 12:07:04.337174    6166 client.go:171] duration metric: took 298.646125ms to LocalClient.Create
	I0829 12:07:06.339321    6166 start.go:128] duration metric: took 2.362544459s to createHost
	I0829 12:07:06.339404    6166 start.go:83] releasing machines lock for "embed-certs-142000", held for 2.363039875s
	W0829 12:07:06.339799    6166 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-142000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-142000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0829 12:07:06.352652    6166 out.go:201] 
	W0829 12:07:06.356578    6166 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0829 12:07:06.356674    6166 out.go:270] * 
	* 
	W0829 12:07:06.359268    6166 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0829 12:07:06.372520    6166 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-142000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-142000 -n embed-certs-142000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-142000 -n embed-certs-142000: exit status 7 (67.898125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-142000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (9.94s)
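Every VM creation in this run dies on `Failed to connect to "/var/run/socket_vmnet": Connection refused`: qemu's socket_vmnet_client finds nothing listening on the socket path recorded in the profile config (SocketVMnetPath:/var/run/socket_vmnet). A minimal triage sketch for the build host, assuming socket_vmnet was installed via Homebrew as the /opt/socket_vmnet paths suggest:

	# does the unix socket exist at the configured path?
	ls -l /var/run/socket_vmnet
	# is the daemon process alive?
	pgrep -fl socket_vmnet
	# if managed by Homebrew services, restart the root daemon (assumption)
	sudo brew services restart socket_vmnet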

TestStartStop/group/embed-certs/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-142000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-142000 create -f testdata/busybox.yaml: exit status 1 (28.985792ms)

** stderr ** 
	error: context "embed-certs-142000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-142000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-142000 -n embed-certs-142000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-142000 -n embed-certs-142000: exit status 7 (29.782333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-142000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-142000 -n embed-certs-142000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-142000 -n embed-certs-142000: exit status 7 (29.40975ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-142000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.09s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.12s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-142000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-142000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-142000 describe deploy/metrics-server -n kube-system: exit status 1 (27.1ms)

** stderr ** 
	error: context "embed-certs-142000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-142000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-142000 -n embed-certs-142000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-142000 -n embed-certs-142000: exit status 7 (30.994791ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-142000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.12s)
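The `addons enable` step itself appears to succeed here (no non-zero exit is logged), since it only rewrites the profile config; the failure surfaces in the kubectl describe step because the cluster never came up. The recorded overrides can be inspected directly in the profile's config.json, whose path appears in the log below (jq is an assumption, not part of the test harness):

	# show the custom addon image and registry overrides recorded for the profile
	jq '{CustomAddonImages, CustomAddonRegistries}' \
	  /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/embed-certs-142000/config.json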

TestStartStop/group/embed-certs/serial/SecondStart (5.26s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-142000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-142000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.0: exit status 80 (5.190140417s)

-- stdout --
	* [embed-certs-142000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19531
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19531-965/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19531-965/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "embed-certs-142000" primary control-plane node in "embed-certs-142000" cluster
	* Restarting existing qemu2 VM for "embed-certs-142000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-142000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0829 12:07:09.710790    6219 out.go:345] Setting OutFile to fd 1 ...
	I0829 12:07:09.710925    6219 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 12:07:09.710929    6219 out.go:358] Setting ErrFile to fd 2...
	I0829 12:07:09.710935    6219 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 12:07:09.711063    6219 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19531-965/.minikube/bin
	I0829 12:07:09.712128    6219 out.go:352] Setting JSON to false
	I0829 12:07:09.728289    6219 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3993,"bootTime":1724954436,"procs":468,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0829 12:07:09.728353    6219 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0829 12:07:09.733681    6219 out.go:177] * [embed-certs-142000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0829 12:07:09.739581    6219 out.go:177]   - MINIKUBE_LOCATION=19531
	I0829 12:07:09.739655    6219 notify.go:220] Checking for updates...
	I0829 12:07:09.746494    6219 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19531-965/kubeconfig
	I0829 12:07:09.749588    6219 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0829 12:07:09.752554    6219 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0829 12:07:09.755467    6219 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19531-965/.minikube
	I0829 12:07:09.758577    6219 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0829 12:07:09.761847    6219 config.go:182] Loaded profile config "embed-certs-142000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0829 12:07:09.762117    6219 driver.go:392] Setting default libvirt URI to qemu:///system
	I0829 12:07:09.765588    6219 out.go:177] * Using the qemu2 driver based on existing profile
	I0829 12:07:09.772624    6219 start.go:297] selected driver: qemu2
	I0829 12:07:09.772631    6219 start.go:901] validating driver "qemu2" against &{Name:embed-certs-142000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-142000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 12:07:09.772681    6219 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0829 12:07:09.775064    6219 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0829 12:07:09.775108    6219 cni.go:84] Creating CNI manager for ""
	I0829 12:07:09.775116    6219 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0829 12:07:09.775140    6219 start.go:340] cluster config:
	{Name:embed-certs-142000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-142000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 12:07:09.778904    6219 iso.go:125] acquiring lock: {Name:mke1867919ff797f263eb38fb348ce00bba9d753 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 12:07:09.787562    6219 out.go:177] * Starting "embed-certs-142000" primary control-plane node in "embed-certs-142000" cluster
	I0829 12:07:09.794555    6219 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0829 12:07:09.794570    6219 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19531-965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0829 12:07:09.794580    6219 cache.go:56] Caching tarball of preloaded images
	I0829 12:07:09.794641    6219 preload.go:172] Found /Users/jenkins/minikube-integration/19531-965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0829 12:07:09.794647    6219 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0829 12:07:09.794703    6219 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/embed-certs-142000/config.json ...
	I0829 12:07:09.795239    6219 start.go:360] acquireMachinesLock for embed-certs-142000: {Name:mk54c878341df5948b162b01843176e59d4f0973 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0829 12:07:09.795276    6219 start.go:364] duration metric: took 29.666µs to acquireMachinesLock for "embed-certs-142000"
	I0829 12:07:09.795284    6219 start.go:96] Skipping create...Using existing machine configuration
	I0829 12:07:09.795289    6219 fix.go:54] fixHost starting: 
	I0829 12:07:09.795413    6219 fix.go:112] recreateIfNeeded on embed-certs-142000: state=Stopped err=<nil>
	W0829 12:07:09.795422    6219 fix.go:138] unexpected machine state, will restart: <nil>
	I0829 12:07:09.800650    6219 out.go:177] * Restarting existing qemu2 VM for "embed-certs-142000" ...
	I0829 12:07:09.810565    6219 qemu.go:418] Using hvf for hardware acceleration
	I0829 12:07:09.810604    6219 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19531-965/.minikube/machines/embed-certs-142000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19531-965/.minikube/machines/embed-certs-142000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19531-965/.minikube/machines/embed-certs-142000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:41:7a:34:64:fb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19531-965/.minikube/machines/embed-certs-142000/disk.qcow2
	I0829 12:07:09.812893    6219 main.go:141] libmachine: STDOUT: 
	I0829 12:07:09.812920    6219 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0829 12:07:09.812945    6219 fix.go:56] duration metric: took 17.656375ms for fixHost
	I0829 12:07:09.812949    6219 start.go:83] releasing machines lock for "embed-certs-142000", held for 17.669709ms
	W0829 12:07:09.812957    6219 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0829 12:07:09.812988    6219 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0829 12:07:09.812993    6219 start.go:729] Will try again in 5 seconds ...
	I0829 12:07:14.815041    6219 start.go:360] acquireMachinesLock for embed-certs-142000: {Name:mk54c878341df5948b162b01843176e59d4f0973 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0829 12:07:14.815536    6219 start.go:364] duration metric: took 381.458µs to acquireMachinesLock for "embed-certs-142000"
	I0829 12:07:14.815717    6219 start.go:96] Skipping create...Using existing machine configuration
	I0829 12:07:14.815743    6219 fix.go:54] fixHost starting: 
	I0829 12:07:14.816519    6219 fix.go:112] recreateIfNeeded on embed-certs-142000: state=Stopped err=<nil>
	W0829 12:07:14.816552    6219 fix.go:138] unexpected machine state, will restart: <nil>
	I0829 12:07:14.822160    6219 out.go:177] * Restarting existing qemu2 VM for "embed-certs-142000" ...
	I0829 12:07:14.828993    6219 qemu.go:418] Using hvf for hardware acceleration
	I0829 12:07:14.829230    6219 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19531-965/.minikube/machines/embed-certs-142000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19531-965/.minikube/machines/embed-certs-142000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19531-965/.minikube/machines/embed-certs-142000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:41:7a:34:64:fb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19531-965/.minikube/machines/embed-certs-142000/disk.qcow2
	I0829 12:07:14.838348    6219 main.go:141] libmachine: STDOUT: 
	I0829 12:07:14.838431    6219 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0829 12:07:14.838556    6219 fix.go:56] duration metric: took 22.813417ms for fixHost
	I0829 12:07:14.838577    6219 start.go:83] releasing machines lock for "embed-certs-142000", held for 22.983416ms
	W0829 12:07:14.838771    6219 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-142000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-142000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0829 12:07:14.845059    6219 out.go:201] 
	W0829 12:07:14.849133    6219 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0829 12:07:14.849176    6219 out.go:270] * 
	* 
	W0829 12:07:14.851591    6219 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0829 12:07:14.859002    6219 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-142000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-142000 -n embed-certs-142000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-142000 -n embed-certs-142000: exit status 7 (67.474709ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-142000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (5.26s)
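When a profile is wedged like this, the log's own advice is the cleanest recovery path: delete the profile and re-run the same start command the test uses. A sketch, using the exact commands from the output above:

	# advice printed by minikube in the failure above
	out/minikube-darwin-arm64 delete -p embed-certs-142000
	# repeat the start command from the test
	out/minikube-darwin-arm64 start -p embed-certs-142000 --memory=2200 --alsologtostderr \
	  --wait=true --embed-certs --driver=qemu2 --kubernetes-version=v1.31.0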

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-142000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-142000 -n embed-certs-142000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-142000 -n embed-certs-142000: exit status 7 (31.437792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-142000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)
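
A note on the recurring "exit status 7 (may be ok)" from the post-mortem status checks: per the minikube status help text, the exit status encodes host, control-plane, and Kubernetes health on its low bits, so 7 just confirms all three are down on a profile that never started, which is why helpers_test.go downgrades it to "may be ok". A minimal decoder under that assumption:

	package main

	import "fmt"

	func main() {
		code := 7 // as returned by the "minikube status" post-mortems above
		fmt.Println("host NOK:      ", code&1 != 0)
		fmt.Println("cluster NOK:   ", code&2 != 0)
		fmt.Println("kubernetes NOK:", code&4 != 0)
	}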

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-142000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-142000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-142000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.98975ms)

** stderr ** 
	error: context "embed-certs-142000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-142000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-142000 -n embed-certs-142000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-142000 -n embed-certs-142000: exit status 7 (29.070208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-142000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p embed-certs-142000 image list --format=json
start_stop_delete_test.go:304: v1.31.0 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.0",
- 	"registry.k8s.io/kube-controller-manager:v1.31.0",
- 	"registry.k8s.io/kube-proxy:v1.31.0",
- 	"registry.k8s.io/kube-scheduler:v1.31.0",
- 	"registry.k8s.io/pause:3.10",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-142000 -n embed-certs-142000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-142000 -n embed-certs-142000: exit status 7 (29.199542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-142000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)
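
The "(-want +got)" listing above is go-cmp diff output (github.com/google/go-cmp); the "+got" side is empty because "image list" had no running host to query, so every expected v1.31.0 image shows as missing. A minimal reproduction of that diff shape, assuming the go-cmp module is available:

	package main

	import (
		"fmt"

		"github.com/google/go-cmp/cmp"
	)

	func main() {
		want := []string{"registry.k8s.io/pause:3.10"} // one of the expected images
		got := []string{}                              // empty: the VM never started
		if diff := cmp.Diff(want, got); diff != "" {
			fmt.Printf("images missing (-want +got):\n%s", diff)
		}
	}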

TestStartStop/group/embed-certs/serial/Pause (0.1s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-142000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-142000 --alsologtostderr -v=1: exit status 83 (39.525166ms)

-- stdout --
	* The control-plane node embed-certs-142000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p embed-certs-142000"

-- /stdout --
** stderr ** 
	I0829 12:07:15.126716    6247 out.go:345] Setting OutFile to fd 1 ...
	I0829 12:07:15.126888    6247 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 12:07:15.126891    6247 out.go:358] Setting ErrFile to fd 2...
	I0829 12:07:15.126894    6247 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 12:07:15.127036    6247 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19531-965/.minikube/bin
	I0829 12:07:15.127260    6247 out.go:352] Setting JSON to false
	I0829 12:07:15.127269    6247 mustload.go:65] Loading cluster: embed-certs-142000
	I0829 12:07:15.127461    6247 config.go:182] Loaded profile config "embed-certs-142000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0829 12:07:15.129475    6247 out.go:177] * The control-plane node embed-certs-142000 host is not running: state=Stopped
	I0829 12:07:15.133636    6247 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-142000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-142000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-142000 -n embed-certs-142000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-142000 -n embed-certs-142000: exit status 7 (29.422083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-142000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-142000 -n embed-certs-142000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-142000 -n embed-certs-142000: exit status 7 (29.105459ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-142000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.10s)
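
Exit status 83 here appears to be an advisory rather than a crash: pause bails out as soon as mustload sees the profile's host in state Stopped, printing the "To start a cluster" hint instead of attempting anything. A hedged sketch of the same guard the post-mortem applies, with the binary path and profile name taken from the log above:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, _ := exec.Command("out/minikube-darwin-arm64", "status",
			"--format={{.Host}}", "-p", "embed-certs-142000").Output()
		state := strings.TrimSpace(string(out))
		if state != "Running" {
			fmt.Println("skipping pause, host state =", state) // "Stopped" in this run
			return
		}
		_ = exec.Command("out/minikube-darwin-arm64", "pause", "-p", "embed-certs-142000").Run()
	}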

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (10.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-502000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-502000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.0: exit status 80 (10.024701167s)

-- stdout --
	* [default-k8s-diff-port-502000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19531
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19531-965/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19531-965/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "default-k8s-diff-port-502000" primary control-plane node in "default-k8s-diff-port-502000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-502000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0829 12:07:15.541541    6271 out.go:345] Setting OutFile to fd 1 ...
	I0829 12:07:15.541660    6271 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 12:07:15.541668    6271 out.go:358] Setting ErrFile to fd 2...
	I0829 12:07:15.541671    6271 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 12:07:15.541793    6271 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19531-965/.minikube/bin
	I0829 12:07:15.543002    6271 out.go:352] Setting JSON to false
	I0829 12:07:15.559053    6271 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":3999,"bootTime":1724954436,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0829 12:07:15.559124    6271 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0829 12:07:15.563500    6271 out.go:177] * [default-k8s-diff-port-502000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0829 12:07:15.571630    6271 notify.go:220] Checking for updates...
	I0829 12:07:15.574583    6271 out.go:177]   - MINIKUBE_LOCATION=19531
	I0829 12:07:15.583602    6271 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19531-965/kubeconfig
	I0829 12:07:15.587608    6271 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0829 12:07:15.590566    6271 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0829 12:07:15.594582    6271 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19531-965/.minikube
	I0829 12:07:15.597573    6271 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0829 12:07:15.600902    6271 config.go:182] Loaded profile config "cert-expiration-916000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0829 12:07:15.600971    6271 config.go:182] Loaded profile config "multinode-531000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0829 12:07:15.601028    6271 driver.go:392] Setting default libvirt URI to qemu:///system
	I0829 12:07:15.605606    6271 out.go:177] * Using the qemu2 driver based on user configuration
	I0829 12:07:15.612543    6271 start.go:297] selected driver: qemu2
	I0829 12:07:15.612549    6271 start.go:901] validating driver "qemu2" against <nil>
	I0829 12:07:15.612554    6271 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0829 12:07:15.614898    6271 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0829 12:07:15.618594    6271 out.go:177] * Automatically selected the socket_vmnet network
	I0829 12:07:15.620038    6271 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0829 12:07:15.620057    6271 cni.go:84] Creating CNI manager for ""
	I0829 12:07:15.620063    6271 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0829 12:07:15.620068    6271 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0829 12:07:15.620102    6271 start.go:340] cluster config:
	{Name:default-k8s-diff-port-502000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-502000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 12:07:15.623827    6271 iso.go:125] acquiring lock: {Name:mke1867919ff797f263eb38fb348ce00bba9d753 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 12:07:15.631585    6271 out.go:177] * Starting "default-k8s-diff-port-502000" primary control-plane node in "default-k8s-diff-port-502000" cluster
	I0829 12:07:15.635553    6271 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0829 12:07:15.635566    6271 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19531-965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0829 12:07:15.635575    6271 cache.go:56] Caching tarball of preloaded images
	I0829 12:07:15.635635    6271 preload.go:172] Found /Users/jenkins/minikube-integration/19531-965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0829 12:07:15.635643    6271 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0829 12:07:15.635705    6271 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/default-k8s-diff-port-502000/config.json ...
	I0829 12:07:15.635717    6271 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/default-k8s-diff-port-502000/config.json: {Name:mkc7ae7e3be512d8474d160afdb2505119707ef4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 12:07:15.635938    6271 start.go:360] acquireMachinesLock for default-k8s-diff-port-502000: {Name:mk54c878341df5948b162b01843176e59d4f0973 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0829 12:07:15.635977    6271 start.go:364] duration metric: took 29.916µs to acquireMachinesLock for "default-k8s-diff-port-502000"
	I0829 12:07:15.635989    6271 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-502000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-502000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0829 12:07:15.636016    6271 start.go:125] createHost starting for "" (driver="qemu2")
	I0829 12:07:15.644520    6271 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0829 12:07:15.663300    6271 start.go:159] libmachine.API.Create for "default-k8s-diff-port-502000" (driver="qemu2")
	I0829 12:07:15.663326    6271 client.go:168] LocalClient.Create starting
	I0829 12:07:15.663391    6271 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19531-965/.minikube/certs/ca.pem
	I0829 12:07:15.663426    6271 main.go:141] libmachine: Decoding PEM data...
	I0829 12:07:15.663434    6271 main.go:141] libmachine: Parsing certificate...
	I0829 12:07:15.663472    6271 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19531-965/.minikube/certs/cert.pem
	I0829 12:07:15.663496    6271 main.go:141] libmachine: Decoding PEM data...
	I0829 12:07:15.663503    6271 main.go:141] libmachine: Parsing certificate...
	I0829 12:07:15.663853    6271 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19531-965/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19531-965/.minikube/cache/iso/arm64/minikube-v1.33.1-1724775098-19521-arm64.iso...
	I0829 12:07:15.824827    6271 main.go:141] libmachine: Creating SSH key...
	I0829 12:07:15.977402    6271 main.go:141] libmachine: Creating Disk image...
	I0829 12:07:15.977408    6271 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0829 12:07:15.977603    6271 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19531-965/.minikube/machines/default-k8s-diff-port-502000/disk.qcow2.raw /Users/jenkins/minikube-integration/19531-965/.minikube/machines/default-k8s-diff-port-502000/disk.qcow2
	I0829 12:07:15.987609    6271 main.go:141] libmachine: STDOUT: 
	I0829 12:07:15.987629    6271 main.go:141] libmachine: STDERR: 
	I0829 12:07:15.987696    6271 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19531-965/.minikube/machines/default-k8s-diff-port-502000/disk.qcow2 +20000M
	I0829 12:07:15.995957    6271 main.go:141] libmachine: STDOUT: Image resized.
	
	I0829 12:07:15.995976    6271 main.go:141] libmachine: STDERR: 
	I0829 12:07:15.995992    6271 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19531-965/.minikube/machines/default-k8s-diff-port-502000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19531-965/.minikube/machines/default-k8s-diff-port-502000/disk.qcow2
	I0829 12:07:15.995996    6271 main.go:141] libmachine: Starting QEMU VM...
	I0829 12:07:15.996014    6271 qemu.go:418] Using hvf for hardware acceleration
	I0829 12:07:15.996040    6271 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19531-965/.minikube/machines/default-k8s-diff-port-502000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19531-965/.minikube/machines/default-k8s-diff-port-502000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19531-965/.minikube/machines/default-k8s-diff-port-502000/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:8d:00:88:44:e1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19531-965/.minikube/machines/default-k8s-diff-port-502000/disk.qcow2
	I0829 12:07:15.997708    6271 main.go:141] libmachine: STDOUT: 
	I0829 12:07:15.997725    6271 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0829 12:07:15.997744    6271 client.go:171] duration metric: took 334.420459ms to LocalClient.Create
	I0829 12:07:17.999950    6271 start.go:128] duration metric: took 2.363962458s to createHost
	I0829 12:07:18.000034    6271 start.go:83] releasing machines lock for "default-k8s-diff-port-502000", held for 2.364106667s
	W0829 12:07:18.000106    6271 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0829 12:07:18.017496    6271 out.go:177] * Deleting "default-k8s-diff-port-502000" in qemu2 ...
	W0829 12:07:18.049119    6271 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0829 12:07:18.049154    6271 start.go:729] Will try again in 5 seconds ...
	I0829 12:07:23.051223    6271 start.go:360] acquireMachinesLock for default-k8s-diff-port-502000: {Name:mk54c878341df5948b162b01843176e59d4f0973 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0829 12:07:23.051736    6271 start.go:364] duration metric: took 353.375µs to acquireMachinesLock for "default-k8s-diff-port-502000"
	I0829 12:07:23.051893    6271 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-502000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-502000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0829 12:07:23.052151    6271 start.go:125] createHost starting for "" (driver="qemu2")
	I0829 12:07:23.061774    6271 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0829 12:07:23.112906    6271 start.go:159] libmachine.API.Create for "default-k8s-diff-port-502000" (driver="qemu2")
	I0829 12:07:23.113108    6271 client.go:168] LocalClient.Create starting
	I0829 12:07:23.113249    6271 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19531-965/.minikube/certs/ca.pem
	I0829 12:07:23.113325    6271 main.go:141] libmachine: Decoding PEM data...
	I0829 12:07:23.113341    6271 main.go:141] libmachine: Parsing certificate...
	I0829 12:07:23.113412    6271 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19531-965/.minikube/certs/cert.pem
	I0829 12:07:23.113454    6271 main.go:141] libmachine: Decoding PEM data...
	I0829 12:07:23.113465    6271 main.go:141] libmachine: Parsing certificate...
	I0829 12:07:23.113971    6271 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19531-965/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19531-965/.minikube/cache/iso/arm64/minikube-v1.33.1-1724775098-19521-arm64.iso...
	I0829 12:07:23.297190    6271 main.go:141] libmachine: Creating SSH key...
	I0829 12:07:23.469453    6271 main.go:141] libmachine: Creating Disk image...
	I0829 12:07:23.469459    6271 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0829 12:07:23.469654    6271 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19531-965/.minikube/machines/default-k8s-diff-port-502000/disk.qcow2.raw /Users/jenkins/minikube-integration/19531-965/.minikube/machines/default-k8s-diff-port-502000/disk.qcow2
	I0829 12:07:23.479575    6271 main.go:141] libmachine: STDOUT: 
	I0829 12:07:23.479593    6271 main.go:141] libmachine: STDERR: 
	I0829 12:07:23.479643    6271 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19531-965/.minikube/machines/default-k8s-diff-port-502000/disk.qcow2 +20000M
	I0829 12:07:23.487832    6271 main.go:141] libmachine: STDOUT: Image resized.
	
	I0829 12:07:23.487847    6271 main.go:141] libmachine: STDERR: 
	I0829 12:07:23.487863    6271 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19531-965/.minikube/machines/default-k8s-diff-port-502000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19531-965/.minikube/machines/default-k8s-diff-port-502000/disk.qcow2
	I0829 12:07:23.487872    6271 main.go:141] libmachine: Starting QEMU VM...
	I0829 12:07:23.487886    6271 qemu.go:418] Using hvf for hardware acceleration
	I0829 12:07:23.487925    6271 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19531-965/.minikube/machines/default-k8s-diff-port-502000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19531-965/.minikube/machines/default-k8s-diff-port-502000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19531-965/.minikube/machines/default-k8s-diff-port-502000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:bf:d9:a5:a4:22 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19531-965/.minikube/machines/default-k8s-diff-port-502000/disk.qcow2
	I0829 12:07:23.489606    6271 main.go:141] libmachine: STDOUT: 
	I0829 12:07:23.489622    6271 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0829 12:07:23.489646    6271 client.go:171] duration metric: took 376.529333ms to LocalClient.Create
	I0829 12:07:25.491764    6271 start.go:128] duration metric: took 2.439632083s to createHost
	I0829 12:07:25.491821    6271 start.go:83] releasing machines lock for "default-k8s-diff-port-502000", held for 2.440121875s
	W0829 12:07:25.492192    6271 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-502000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-502000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0829 12:07:25.508819    6271 out.go:201] 
	W0829 12:07:25.513852    6271 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0829 12:07:25.513879    6271 out.go:270] * 
	* 
	W0829 12:07:25.516793    6271 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0829 12:07:25.525793    6271 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-502000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-502000 -n default-k8s-diff-port-502000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-502000 -n default-k8s-diff-port-502000: exit status 7 (67.531ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-502000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (10.09s)
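
The doubled "Creating qemu2 VM" / "Connection refused" sequence in the stdout above reflects the driver's one automatic retry: StartHost fails, the half-created profile is deleted, and after "Will try again in 5 seconds" the same create runs once more before GUEST_PROVISION is raised. A rough sketch of that observed retry shape, not minikube's actual start.go:

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	func createHost() error {
		// Stand-in for libmachine's create; in this run it always fails the same way.
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		if err := createHost(); err != nil {
			fmt.Println("! StartHost failed, but will try again:", err)
			time.Sleep(5 * time.Second)
			if err := createHost(); err != nil {
				fmt.Println("X Exiting due to GUEST_PROVISION:", err)
			}
		}
	}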

TestStartStop/group/newest-cni/serial/FirstStart (10.11s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-182000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-182000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0: exit status 80 (10.043268542s)

-- stdout --
	* [newest-cni-182000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19531
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19531-965/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19531-965/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "newest-cni-182000" primary control-plane node in "newest-cni-182000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-182000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0829 12:07:18.642530    6287 out.go:345] Setting OutFile to fd 1 ...
	I0829 12:07:18.642788    6287 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 12:07:18.642794    6287 out.go:358] Setting ErrFile to fd 2...
	I0829 12:07:18.642797    6287 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 12:07:18.642995    6287 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19531-965/.minikube/bin
	I0829 12:07:18.644320    6287 out.go:352] Setting JSON to false
	I0829 12:07:18.660811    6287 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4002,"bootTime":1724954436,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0829 12:07:18.660880    6287 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0829 12:07:18.667424    6287 out.go:177] * [newest-cni-182000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0829 12:07:18.674488    6287 out.go:177]   - MINIKUBE_LOCATION=19531
	I0829 12:07:18.674543    6287 notify.go:220] Checking for updates...
	I0829 12:07:18.682371    6287 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19531-965/kubeconfig
	I0829 12:07:18.685343    6287 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0829 12:07:18.688402    6287 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0829 12:07:18.691353    6287 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19531-965/.minikube
	I0829 12:07:18.694370    6287 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0829 12:07:18.697762    6287 config.go:182] Loaded profile config "default-k8s-diff-port-502000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0829 12:07:18.697831    6287 config.go:182] Loaded profile config "multinode-531000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0829 12:07:18.697884    6287 driver.go:392] Setting default libvirt URI to qemu:///system
	I0829 12:07:18.702253    6287 out.go:177] * Using the qemu2 driver based on user configuration
	I0829 12:07:18.709340    6287 start.go:297] selected driver: qemu2
	I0829 12:07:18.709346    6287 start.go:901] validating driver "qemu2" against <nil>
	I0829 12:07:18.709351    6287 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0829 12:07:18.711710    6287 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0829 12:07:18.711741    6287 out.go:270] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0829 12:07:18.720292    6287 out.go:177] * Automatically selected the socket_vmnet network
	I0829 12:07:18.723436    6287 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0829 12:07:18.723469    6287 cni.go:84] Creating CNI manager for ""
	I0829 12:07:18.723477    6287 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0829 12:07:18.723482    6287 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0829 12:07:18.723509    6287 start.go:340] cluster config:
	{Name:newest-cni-182000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:newest-cni-182000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 12:07:18.727255    6287 iso.go:125] acquiring lock: {Name:mke1867919ff797f263eb38fb348ce00bba9d753 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 12:07:18.734343    6287 out.go:177] * Starting "newest-cni-182000" primary control-plane node in "newest-cni-182000" cluster
	I0829 12:07:18.737239    6287 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0829 12:07:18.737257    6287 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19531-965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0829 12:07:18.737272    6287 cache.go:56] Caching tarball of preloaded images
	I0829 12:07:18.737356    6287 preload.go:172] Found /Users/jenkins/minikube-integration/19531-965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0829 12:07:18.737363    6287 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0829 12:07:18.737429    6287 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/newest-cni-182000/config.json ...
	I0829 12:07:18.737447    6287 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/newest-cni-182000/config.json: {Name:mk67222b06a6f6515d857e15b0833b2f4ae0e793 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 12:07:18.737680    6287 start.go:360] acquireMachinesLock for newest-cni-182000: {Name:mk54c878341df5948b162b01843176e59d4f0973 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0829 12:07:18.737716    6287 start.go:364] duration metric: took 30.166µs to acquireMachinesLock for "newest-cni-182000"
	I0829 12:07:18.737727    6287 start.go:93] Provisioning new machine with config: &{Name:newest-cni-182000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:newest-cni-182000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0829 12:07:18.737764    6287 start.go:125] createHost starting for "" (driver="qemu2")
	I0829 12:07:18.742420    6287 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0829 12:07:18.761344    6287 start.go:159] libmachine.API.Create for "newest-cni-182000" (driver="qemu2")
	I0829 12:07:18.761372    6287 client.go:168] LocalClient.Create starting
	I0829 12:07:18.761448    6287 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19531-965/.minikube/certs/ca.pem
	I0829 12:07:18.761484    6287 main.go:141] libmachine: Decoding PEM data...
	I0829 12:07:18.761494    6287 main.go:141] libmachine: Parsing certificate...
	I0829 12:07:18.761532    6287 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19531-965/.minikube/certs/cert.pem
	I0829 12:07:18.761556    6287 main.go:141] libmachine: Decoding PEM data...
	I0829 12:07:18.761565    6287 main.go:141] libmachine: Parsing certificate...
	I0829 12:07:18.761927    6287 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19531-965/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19531-965/.minikube/cache/iso/arm64/minikube-v1.33.1-1724775098-19521-arm64.iso...
	I0829 12:07:18.921452    6287 main.go:141] libmachine: Creating SSH key...
	I0829 12:07:19.125216    6287 main.go:141] libmachine: Creating Disk image...
	I0829 12:07:19.125223    6287 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0829 12:07:19.125423    6287 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19531-965/.minikube/machines/newest-cni-182000/disk.qcow2.raw /Users/jenkins/minikube-integration/19531-965/.minikube/machines/newest-cni-182000/disk.qcow2
	I0829 12:07:19.135451    6287 main.go:141] libmachine: STDOUT: 
	I0829 12:07:19.135481    6287 main.go:141] libmachine: STDERR: 
	I0829 12:07:19.135528    6287 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19531-965/.minikube/machines/newest-cni-182000/disk.qcow2 +20000M
	I0829 12:07:19.143538    6287 main.go:141] libmachine: STDOUT: Image resized.
	
	I0829 12:07:19.143552    6287 main.go:141] libmachine: STDERR: 
	I0829 12:07:19.143565    6287 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19531-965/.minikube/machines/newest-cni-182000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19531-965/.minikube/machines/newest-cni-182000/disk.qcow2
	I0829 12:07:19.143569    6287 main.go:141] libmachine: Starting QEMU VM...
	I0829 12:07:19.143578    6287 qemu.go:418] Using hvf for hardware acceleration
	I0829 12:07:19.143607    6287 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19531-965/.minikube/machines/newest-cni-182000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19531-965/.minikube/machines/newest-cni-182000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19531-965/.minikube/machines/newest-cni-182000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:85:86:05:cd:57 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19531-965/.minikube/machines/newest-cni-182000/disk.qcow2
	I0829 12:07:19.145169    6287 main.go:141] libmachine: STDOUT: 
	I0829 12:07:19.145182    6287 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0829 12:07:19.145210    6287 client.go:171] duration metric: took 383.841291ms to LocalClient.Create
	I0829 12:07:21.147356    6287 start.go:128] duration metric: took 2.409625958s to createHost
	I0829 12:07:21.147464    6287 start.go:83] releasing machines lock for "newest-cni-182000", held for 2.409770542s
	W0829 12:07:21.147536    6287 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0829 12:07:21.158680    6287 out.go:177] * Deleting "newest-cni-182000" in qemu2 ...
	W0829 12:07:21.197500    6287 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0829 12:07:21.197529    6287 start.go:729] Will try again in 5 seconds ...
	I0829 12:07:26.199584    6287 start.go:360] acquireMachinesLock for newest-cni-182000: {Name:mk54c878341df5948b162b01843176e59d4f0973 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0829 12:07:26.200036    6287 start.go:364] duration metric: took 344.125µs to acquireMachinesLock for "newest-cni-182000"
	I0829 12:07:26.200204    6287 start.go:93] Provisioning new machine with config: &{Name:newest-cni-182000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:newest-cni-182000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0829 12:07:26.200485    6287 start.go:125] createHost starting for "" (driver="qemu2")
	I0829 12:07:26.209169    6287 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0829 12:07:26.259581    6287 start.go:159] libmachine.API.Create for "newest-cni-182000" (driver="qemu2")
	I0829 12:07:26.259635    6287 client.go:168] LocalClient.Create starting
	I0829 12:07:26.259724    6287 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19531-965/.minikube/certs/ca.pem
	I0829 12:07:26.259802    6287 main.go:141] libmachine: Decoding PEM data...
	I0829 12:07:26.259818    6287 main.go:141] libmachine: Parsing certificate...
	I0829 12:07:26.259883    6287 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19531-965/.minikube/certs/cert.pem
	I0829 12:07:26.259919    6287 main.go:141] libmachine: Decoding PEM data...
	I0829 12:07:26.259932    6287 main.go:141] libmachine: Parsing certificate...
	I0829 12:07:26.260450    6287 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19531-965/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19531-965/.minikube/cache/iso/arm64/minikube-v1.33.1-1724775098-19521-arm64.iso...
	I0829 12:07:26.444372    6287 main.go:141] libmachine: Creating SSH key...
	I0829 12:07:26.597126    6287 main.go:141] libmachine: Creating Disk image...
	I0829 12:07:26.597133    6287 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0829 12:07:26.597340    6287 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19531-965/.minikube/machines/newest-cni-182000/disk.qcow2.raw /Users/jenkins/minikube-integration/19531-965/.minikube/machines/newest-cni-182000/disk.qcow2
	I0829 12:07:26.607094    6287 main.go:141] libmachine: STDOUT: 
	I0829 12:07:26.607116    6287 main.go:141] libmachine: STDERR: 
	I0829 12:07:26.607175    6287 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19531-965/.minikube/machines/newest-cni-182000/disk.qcow2 +20000M
	I0829 12:07:26.615242    6287 main.go:141] libmachine: STDOUT: Image resized.
	
	I0829 12:07:26.615271    6287 main.go:141] libmachine: STDERR: 
	I0829 12:07:26.615286    6287 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19531-965/.minikube/machines/newest-cni-182000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19531-965/.minikube/machines/newest-cni-182000/disk.qcow2
	I0829 12:07:26.615290    6287 main.go:141] libmachine: Starting QEMU VM...
	I0829 12:07:26.615299    6287 qemu.go:418] Using hvf for hardware acceleration
	I0829 12:07:26.615330    6287 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19531-965/.minikube/machines/newest-cni-182000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19531-965/.minikube/machines/newest-cni-182000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19531-965/.minikube/machines/newest-cni-182000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:1d:dc:fa:2f:07 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19531-965/.minikube/machines/newest-cni-182000/disk.qcow2
	I0829 12:07:26.617058    6287 main.go:141] libmachine: STDOUT: 
	I0829 12:07:26.617076    6287 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0829 12:07:26.617089    6287 client.go:171] duration metric: took 357.458166ms to LocalClient.Create
	I0829 12:07:28.619225    6287 start.go:128] duration metric: took 2.418767792s to createHost
	I0829 12:07:28.619291    6287 start.go:83] releasing machines lock for "newest-cni-182000", held for 2.419289375s
	W0829 12:07:28.619622    6287 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-182000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-182000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0829 12:07:28.632003    6287 out.go:201] 
	W0829 12:07:28.636109    6287 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0829 12:07:28.636131    6287 out.go:270] * 
	* 
	W0829 12:07:28.638942    6287 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0829 12:07:28.646204    6287 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-182000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-182000 -n newest-cni-182000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-182000 -n newest-cni-182000: exit status 7 (63.59875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-182000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (10.11s)
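
Note: every failed start in this report dies at the same step: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet, so no socket_vmnet daemon was listening on the build agent. A minimal standalone Go sketch of that reachability check (illustrative only, not minikube code; the socket path is taken from the traces above):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Socket path from the socket_vmnet_client invocations in the traces.
		const sock = "/var/run/socket_vmnet"

		// A plain unix-domain dial reproduces the failure mode: "Connection
		// refused" means nothing is listening on the socket.
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			fmt.Println("socket_vmnet unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}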

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-502000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-502000 create -f testdata/busybox.yaml: exit status 1 (29.021708ms)

** stderr ** 
	error: context "default-k8s-diff-port-502000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-502000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-502000 -n default-k8s-diff-port-502000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-502000 -n default-k8s-diff-port-502000: exit status 7 (29.328667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-502000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-502000 -n default-k8s-diff-port-502000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-502000 -n default-k8s-diff-port-502000: exit status 7 (29.356084ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-502000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)
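
Note: the recurring error `context "default-k8s-diff-port-502000" does not exist` has a single cause: the cluster never started, so minikube never wrote the profile's context into the kubeconfig, and every kubectl --context call in this group fails the same way. A small sketch that lists the contexts the kubeconfig actually contains (an illustration assuming the k8s.io/client-go module; not part of the test suite):

	package main

	import (
		"fmt"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Load the kubeconfig the same way kubectl does (KUBECONFIG first,
		// then the default path).
		rules := clientcmd.NewDefaultClientConfigLoadingRules()
		cfg, err := rules.Load()
		if err != nil {
			fmt.Println("load kubeconfig:", err)
			return
		}
		// A failed start leaves no entry for the profile, which is exactly
		// what kubectl's "context ... does not exist" error reports.
		if _, ok := cfg.Contexts["default-k8s-diff-port-502000"]; !ok {
			fmt.Println("context not written; the cluster never came up")
		}
		for name := range cfg.Contexts {
			fmt.Println("available context:", name)
		}
	}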

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-502000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-502000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-502000 describe deploy/metrics-server -n kube-system: exit status 1 (26.614625ms)

** stderr ** 
	error: context "default-k8s-diff-port-502000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-502000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-502000 -n default-k8s-diff-port-502000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-502000 -n default-k8s-diff-port-502000: exit status 7 (29.855667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-502000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (6.08s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-502000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-502000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.0: exit status 80 (6.004772583s)

-- stdout --
	* [default-k8s-diff-port-502000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19531
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19531-965/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19531-965/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "default-k8s-diff-port-502000" primary control-plane node in "default-k8s-diff-port-502000" cluster
	* Restarting existing qemu2 VM for "default-k8s-diff-port-502000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-502000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0829 12:07:27.730582    6331 out.go:345] Setting OutFile to fd 1 ...
	I0829 12:07:27.730697    6331 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 12:07:27.730700    6331 out.go:358] Setting ErrFile to fd 2...
	I0829 12:07:27.730702    6331 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 12:07:27.730826    6331 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19531-965/.minikube/bin
	I0829 12:07:27.731840    6331 out.go:352] Setting JSON to false
	I0829 12:07:27.747812    6331 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4011,"bootTime":1724954436,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0829 12:07:27.747890    6331 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0829 12:07:27.752417    6331 out.go:177] * [default-k8s-diff-port-502000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0829 12:07:27.758601    6331 out.go:177]   - MINIKUBE_LOCATION=19531
	I0829 12:07:27.758670    6331 notify.go:220] Checking for updates...
	I0829 12:07:27.766515    6331 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19531-965/kubeconfig
	I0829 12:07:27.769574    6331 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0829 12:07:27.771165    6331 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0829 12:07:27.778631    6331 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19531-965/.minikube
	I0829 12:07:27.781571    6331 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0829 12:07:27.783205    6331 config.go:182] Loaded profile config "default-k8s-diff-port-502000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0829 12:07:27.783480    6331 driver.go:392] Setting default libvirt URI to qemu:///system
	I0829 12:07:27.787566    6331 out.go:177] * Using the qemu2 driver based on existing profile
	I0829 12:07:27.794460    6331 start.go:297] selected driver: qemu2
	I0829 12:07:27.794467    6331 start.go:901] validating driver "qemu2" against &{Name:default-k8s-diff-port-502000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-502000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 12:07:27.794525    6331 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0829 12:07:27.796795    6331 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0829 12:07:27.796826    6331 cni.go:84] Creating CNI manager for ""
	I0829 12:07:27.796834    6331 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0829 12:07:27.796864    6331 start.go:340] cluster config:
	{Name:default-k8s-diff-port-502000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:default-k8s-diff-port-502000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 12:07:27.800536    6331 iso.go:125] acquiring lock: {Name:mke1867919ff797f263eb38fb348ce00bba9d753 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 12:07:27.808598    6331 out.go:177] * Starting "default-k8s-diff-port-502000" primary control-plane node in "default-k8s-diff-port-502000" cluster
	I0829 12:07:27.813569    6331 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0829 12:07:27.813588    6331 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19531-965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0829 12:07:27.813604    6331 cache.go:56] Caching tarball of preloaded images
	I0829 12:07:27.813673    6331 preload.go:172] Found /Users/jenkins/minikube-integration/19531-965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0829 12:07:27.813679    6331 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0829 12:07:27.813735    6331 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/default-k8s-diff-port-502000/config.json ...
	I0829 12:07:27.814308    6331 start.go:360] acquireMachinesLock for default-k8s-diff-port-502000: {Name:mk54c878341df5948b162b01843176e59d4f0973 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0829 12:07:28.619408    6331 start.go:364] duration metric: took 805.09375ms to acquireMachinesLock for "default-k8s-diff-port-502000"
	I0829 12:07:28.619590    6331 start.go:96] Skipping create...Using existing machine configuration
	I0829 12:07:28.619668    6331 fix.go:54] fixHost starting: 
	I0829 12:07:28.620379    6331 fix.go:112] recreateIfNeeded on default-k8s-diff-port-502000: state=Stopped err=<nil>
	W0829 12:07:28.620428    6331 fix.go:138] unexpected machine state, will restart: <nil>
	I0829 12:07:28.632002    6331 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-502000" ...
	I0829 12:07:28.636084    6331 qemu.go:418] Using hvf for hardware acceleration
	I0829 12:07:28.636305    6331 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19531-965/.minikube/machines/default-k8s-diff-port-502000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19531-965/.minikube/machines/default-k8s-diff-port-502000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19531-965/.minikube/machines/default-k8s-diff-port-502000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:bf:d9:a5:a4:22 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19531-965/.minikube/machines/default-k8s-diff-port-502000/disk.qcow2
	I0829 12:07:28.646324    6331 main.go:141] libmachine: STDOUT: 
	I0829 12:07:28.646411    6331 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0829 12:07:28.646532    6331 fix.go:56] duration metric: took 26.882584ms for fixHost
	I0829 12:07:28.646551    6331 start.go:83] releasing machines lock for "default-k8s-diff-port-502000", held for 27.107791ms
	W0829 12:07:28.646573    6331 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0829 12:07:28.646754    6331 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0829 12:07:28.646770    6331 start.go:729] Will try again in 5 seconds ...
	I0829 12:07:33.648938    6331 start.go:360] acquireMachinesLock for default-k8s-diff-port-502000: {Name:mk54c878341df5948b162b01843176e59d4f0973 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0829 12:07:33.649339    6331 start.go:364] duration metric: took 291.125µs to acquireMachinesLock for "default-k8s-diff-port-502000"
	I0829 12:07:33.649469    6331 start.go:96] Skipping create...Using existing machine configuration
	I0829 12:07:33.649491    6331 fix.go:54] fixHost starting: 
	I0829 12:07:33.650240    6331 fix.go:112] recreateIfNeeded on default-k8s-diff-port-502000: state=Stopped err=<nil>
	W0829 12:07:33.650271    6331 fix.go:138] unexpected machine state, will restart: <nil>
	I0829 12:07:33.659952    6331 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-502000" ...
	I0829 12:07:33.663935    6331 qemu.go:418] Using hvf for hardware acceleration
	I0829 12:07:33.664148    6331 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19531-965/.minikube/machines/default-k8s-diff-port-502000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19531-965/.minikube/machines/default-k8s-diff-port-502000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19531-965/.minikube/machines/default-k8s-diff-port-502000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:bf:d9:a5:a4:22 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19531-965/.minikube/machines/default-k8s-diff-port-502000/disk.qcow2
	I0829 12:07:33.673073    6331 main.go:141] libmachine: STDOUT: 
	I0829 12:07:33.673135    6331 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0829 12:07:33.673203    6331 fix.go:56] duration metric: took 23.715208ms for fixHost
	I0829 12:07:33.673220    6331 start.go:83] releasing machines lock for "default-k8s-diff-port-502000", held for 23.861583ms
	W0829 12:07:33.673374    6331 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-502000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-502000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0829 12:07:33.680919    6331 out.go:201] 
	W0829 12:07:33.683995    6331 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0829 12:07:33.684019    6331 out.go:270] * 
	* 
	W0829 12:07:33.686582    6331 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0829 12:07:33.694953    6331 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-502000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-502000 -n default-k8s-diff-port-502000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-502000 -n default-k8s-diff-port-502000: exit status 7 (68.6755ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-502000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (6.08s)
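
Note: the SecondStart traces show the start path retrying exactly once: StartHost fails, minikube waits five seconds ("Will try again in 5 seconds ..."), retries, and only then exits with GUEST_PROVISION. A sketch of that control flow, where startHost is a hypothetical stand-in for the real fixHost/createHost path:

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// startHost stands in for minikube's host-start path; here it always
	// fails the way the runs above do.
	func startHost() error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		if err := startHost(); err != nil {
			fmt.Println("! StartHost failed, but will try again:", err)
			time.Sleep(5 * time.Second) // matches "Will try again in 5 seconds ..."
			if err := startHost(); err != nil {
				fmt.Println("X Exiting due to GUEST_PROVISION:", err)
			}
		}
	}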

TestStartStop/group/newest-cni/serial/SecondStart (5.25s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-182000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-182000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0: exit status 80 (5.181838625s)

-- stdout --
	* [newest-cni-182000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19531
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19531-965/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19531-965/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "newest-cni-182000" primary control-plane node in "newest-cni-182000" cluster
	* Restarting existing qemu2 VM for "newest-cni-182000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-182000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0829 12:07:32.149986    6366 out.go:345] Setting OutFile to fd 1 ...
	I0829 12:07:32.150106    6366 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 12:07:32.150109    6366 out.go:358] Setting ErrFile to fd 2...
	I0829 12:07:32.150112    6366 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 12:07:32.150223    6366 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19531-965/.minikube/bin
	I0829 12:07:32.151188    6366 out.go:352] Setting JSON to false
	I0829 12:07:32.167131    6366 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":4016,"bootTime":1724954436,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0829 12:07:32.167202    6366 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0829 12:07:32.172332    6366 out.go:177] * [newest-cni-182000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0829 12:07:32.179517    6366 out.go:177]   - MINIKUBE_LOCATION=19531
	I0829 12:07:32.179568    6366 notify.go:220] Checking for updates...
	I0829 12:07:32.185490    6366 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19531-965/kubeconfig
	I0829 12:07:32.188511    6366 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0829 12:07:32.190007    6366 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0829 12:07:32.193449    6366 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19531-965/.minikube
	I0829 12:07:32.196504    6366 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0829 12:07:32.199776    6366 config.go:182] Loaded profile config "newest-cni-182000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0829 12:07:32.200042    6366 driver.go:392] Setting default libvirt URI to qemu:///system
	I0829 12:07:32.203432    6366 out.go:177] * Using the qemu2 driver based on existing profile
	I0829 12:07:32.210471    6366 start.go:297] selected driver: qemu2
	I0829 12:07:32.210480    6366 start.go:901] validating driver "qemu2" against &{Name:newest-cni-182000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:newest-cni-182000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 12:07:32.210538    6366 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0829 12:07:32.212716    6366 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0829 12:07:32.212764    6366 cni.go:84] Creating CNI manager for ""
	I0829 12:07:32.212771    6366 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0829 12:07:32.212795    6366 start.go:340] cluster config:
	{Name:newest-cni-182000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:newest-cni-182000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 12:07:32.216181    6366 iso.go:125] acquiring lock: {Name:mke1867919ff797f263eb38fb348ce00bba9d753 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 12:07:32.223482    6366 out.go:177] * Starting "newest-cni-182000" primary control-plane node in "newest-cni-182000" cluster
	I0829 12:07:32.227480    6366 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0829 12:07:32.227499    6366 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19531-965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0829 12:07:32.227515    6366 cache.go:56] Caching tarball of preloaded images
	I0829 12:07:32.227584    6366 preload.go:172] Found /Users/jenkins/minikube-integration/19531-965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0829 12:07:32.227594    6366 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0829 12:07:32.227661    6366 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/newest-cni-182000/config.json ...
	I0829 12:07:32.228083    6366 start.go:360] acquireMachinesLock for newest-cni-182000: {Name:mk54c878341df5948b162b01843176e59d4f0973 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0829 12:07:32.228112    6366 start.go:364] duration metric: took 23.333µs to acquireMachinesLock for "newest-cni-182000"
	I0829 12:07:32.228121    6366 start.go:96] Skipping create...Using existing machine configuration
	I0829 12:07:32.228126    6366 fix.go:54] fixHost starting: 
	I0829 12:07:32.228249    6366 fix.go:112] recreateIfNeeded on newest-cni-182000: state=Stopped err=<nil>
	W0829 12:07:32.228257    6366 fix.go:138] unexpected machine state, will restart: <nil>
	I0829 12:07:32.232678    6366 out.go:177] * Restarting existing qemu2 VM for "newest-cni-182000" ...
	I0829 12:07:32.240455    6366 qemu.go:418] Using hvf for hardware acceleration
	I0829 12:07:32.240497    6366 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19531-965/.minikube/machines/newest-cni-182000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19531-965/.minikube/machines/newest-cni-182000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19531-965/.minikube/machines/newest-cni-182000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:1d:dc:fa:2f:07 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19531-965/.minikube/machines/newest-cni-182000/disk.qcow2
	I0829 12:07:32.242546    6366 main.go:141] libmachine: STDOUT: 
	I0829 12:07:32.242569    6366 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0829 12:07:32.242598    6366 fix.go:56] duration metric: took 14.47225ms for fixHost
	I0829 12:07:32.242603    6366 start.go:83] releasing machines lock for "newest-cni-182000", held for 14.48625ms
	W0829 12:07:32.242610    6366 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0829 12:07:32.242643    6366 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0829 12:07:32.242648    6366 start.go:729] Will try again in 5 seconds ...
	I0829 12:07:37.244795    6366 start.go:360] acquireMachinesLock for newest-cni-182000: {Name:mk54c878341df5948b162b01843176e59d4f0973 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0829 12:07:37.245484    6366 start.go:364] duration metric: took 536µs to acquireMachinesLock for "newest-cni-182000"
	I0829 12:07:37.245678    6366 start.go:96] Skipping create...Using existing machine configuration
	I0829 12:07:37.245701    6366 fix.go:54] fixHost starting: 
	I0829 12:07:37.246552    6366 fix.go:112] recreateIfNeeded on newest-cni-182000: state=Stopped err=<nil>
	W0829 12:07:37.246581    6366 fix.go:138] unexpected machine state, will restart: <nil>
	I0829 12:07:37.251213    6366 out.go:177] * Restarting existing qemu2 VM for "newest-cni-182000" ...
	I0829 12:07:37.258930    6366 qemu.go:418] Using hvf for hardware acceleration
	I0829 12:07:37.259174    6366 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19531-965/.minikube/machines/newest-cni-182000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19531-965/.minikube/machines/newest-cni-182000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19531-965/.minikube/machines/newest-cni-182000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:1d:dc:fa:2f:07 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19531-965/.minikube/machines/newest-cni-182000/disk.qcow2
	I0829 12:07:37.269043    6366 main.go:141] libmachine: STDOUT: 
	I0829 12:07:37.269120    6366 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0829 12:07:37.269204    6366 fix.go:56] duration metric: took 23.5045ms for fixHost
	I0829 12:07:37.269221    6366 start.go:83] releasing machines lock for "newest-cni-182000", held for 23.687375ms
	W0829 12:07:37.269397    6366 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-182000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-182000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0829 12:07:37.277057    6366 out.go:201] 
	W0829 12:07:37.280088    6366 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0829 12:07:37.280127    6366 out.go:270] * 
	* 
	W0829 12:07:37.282924    6366 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0829 12:07:37.291980    6366 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-182000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-182000 -n newest-cni-182000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-182000 -n newest-cni-182000: exit status 7 (68.960583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-182000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.25s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-502000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-502000 -n default-k8s-diff-port-502000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-502000 -n default-k8s-diff-port-502000: exit status 7 (32.727875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-502000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-502000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-502000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-502000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.486417ms)

** stderr ** 
	error: context "default-k8s-diff-port-502000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-502000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-502000 -n default-k8s-diff-port-502000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-502000 -n default-k8s-diff-port-502000: exit status 7 (28.7615ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-502000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p default-k8s-diff-port-502000 image list --format=json
start_stop_delete_test.go:304: v1.31.0 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.0",
- 	"registry.k8s.io/kube-controller-manager:v1.31.0",
- 	"registry.k8s.io/kube-proxy:v1.31.0",
- 	"registry.k8s.io/kube-scheduler:v1.31.0",
- 	"registry.k8s.io/pause:3.10",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-502000 -n default-k8s-diff-port-502000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-502000 -n default-k8s-diff-port-502000: exit status 7 (28.715959ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-502000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)
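
Note: the "(-want +got)" block above is a go-cmp style diff: every "-" line is an image expected for v1.31.0 that "image list" never returned (here, all of them, since the VM never booted). A short sketch reproducing that output format (image list abbreviated; assumes the github.com/google/go-cmp module):

	package main

	import (
		"fmt"

		"github.com/google/go-cmp/cmp"
	)

	func main() {
		// Expected images for the Kubernetes version under test (abbreviated).
		want := []string{
			"registry.k8s.io/kube-apiserver:v1.31.0",
			"registry.k8s.io/pause:3.10",
		}
		// "image list" returned nothing because the VM never started.
		got := []string{}

		// cmp.Diff prefixes entries present only in want with "-" and entries
		// present only in got with "+", matching the failure output above.
		if diff := cmp.Diff(want, got); diff != "" {
			fmt.Printf("v1.31.0 images missing (-want +got):\n%s", diff)
		}
	}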

TestStartStop/group/default-k8s-diff-port/serial/Pause (0.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-502000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-502000 --alsologtostderr -v=1: exit status 83 (39.884667ms)

-- stdout --
	* The control-plane node default-k8s-diff-port-502000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-502000"

-- /stdout --
** stderr ** 
	I0829 12:07:33.962273    6385 out.go:345] Setting OutFile to fd 1 ...
	I0829 12:07:33.962426    6385 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 12:07:33.962429    6385 out.go:358] Setting ErrFile to fd 2...
	I0829 12:07:33.962431    6385 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 12:07:33.962559    6385 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19531-965/.minikube/bin
	I0829 12:07:33.962775    6385 out.go:352] Setting JSON to false
	I0829 12:07:33.962782    6385 mustload.go:65] Loading cluster: default-k8s-diff-port-502000
	I0829 12:07:33.962982    6385 config.go:182] Loaded profile config "default-k8s-diff-port-502000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0829 12:07:33.967154    6385 out.go:177] * The control-plane node default-k8s-diff-port-502000 host is not running: state=Stopped
	I0829 12:07:33.971289    6385 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-502000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-502000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-502000 -n default-k8s-diff-port-502000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-502000 -n default-k8s-diff-port-502000: exit status 7 (28.370666ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-502000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-502000 -n default-k8s-diff-port-502000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-502000 -n default-k8s-diff-port-502000: exit status 7 (29.514375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-502000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.10s)
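
Exit status 83 recurs throughout this report whenever pause is pointed at a profile whose host is Stopped: minikube prints the "host is not running" hint instead of pausing. A small sketch of branching on that code from a caller, treating 83 simply as the value observed in this run rather than a documented constant:

    // Sketch: run `minikube pause` and branch on the exit code.
    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("out/minikube-darwin-arm64", "pause",
            "-p", "default-k8s-diff-port-502000", "--alsologtostderr", "-v=1")
        err := cmd.Run()
        var ee *exec.ExitError
        if errors.As(err, &ee) && ee.ExitCode() == 83 {
            fmt.Println(`host not running; run "minikube start -p default-k8s-diff-port-502000" first`)
        } else if err != nil {
            fmt.Println("pause failed:", err)
        }
    }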

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p newest-cni-182000 image list --format=json
start_stop_delete_test.go:304: v1.31.0 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.0",
- 	"registry.k8s.io/kube-controller-manager:v1.31.0",
- 	"registry.k8s.io/kube-proxy:v1.31.0",
- 	"registry.k8s.io/kube-scheduler:v1.31.0",
- 	"registry.k8s.io/pause:3.10",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-182000 -n newest-cni-182000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-182000 -n newest-cni-182000: exit status 7 (30.10475ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-182000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

TestStartStop/group/newest-cni/serial/Pause (0.1s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-182000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-182000 --alsologtostderr -v=1: exit status 83 (41.650792ms)

-- stdout --
	* The control-plane node newest-cni-182000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p newest-cni-182000"

-- /stdout --
** stderr ** 
	I0829 12:07:37.476687    6409 out.go:345] Setting OutFile to fd 1 ...
	I0829 12:07:37.476840    6409 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 12:07:37.476843    6409 out.go:358] Setting ErrFile to fd 2...
	I0829 12:07:37.476846    6409 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 12:07:37.476963    6409 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19531-965/.minikube/bin
	I0829 12:07:37.477178    6409 out.go:352] Setting JSON to false
	I0829 12:07:37.477185    6409 mustload.go:65] Loading cluster: newest-cni-182000
	I0829 12:07:37.477382    6409 config.go:182] Loaded profile config "newest-cni-182000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0829 12:07:37.482080    6409 out.go:177] * The control-plane node newest-cni-182000 host is not running: state=Stopped
	I0829 12:07:37.485848    6409 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-182000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-182000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-182000 -n newest-cni-182000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-182000 -n newest-cni-182000: exit status 7 (30.158791ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-182000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-182000 -n newest-cni-182000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-182000 -n newest-cni-182000: exit status 7 (30.305ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-182000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.10s)

Test pass (155/274)

Order passed test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.09
9 TestDownloadOnly/v1.20.0/DeleteAll 0.12
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.1
12 TestDownloadOnly/v1.31.0/json-events 7.98
13 TestDownloadOnly/v1.31.0/preload-exists 0
16 TestDownloadOnly/v1.31.0/kubectl 0
17 TestDownloadOnly/v1.31.0/LogsDuration 0.08
18 TestDownloadOnly/v1.31.0/DeleteAll 0.1
19 TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds 0.1
21 TestBinaryMirror 0.37
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 199.87
29 TestAddons/serial/Volcano 38.24
31 TestAddons/serial/GCPAuth/Namespaces 0.09
34 TestAddons/parallel/Ingress 18.36
35 TestAddons/parallel/InspektorGadget 10.29
36 TestAddons/parallel/MetricsServer 5.3
39 TestAddons/parallel/CSI 48.33
40 TestAddons/parallel/Headlamp 16.6
41 TestAddons/parallel/CloudSpanner 5.19
42 TestAddons/parallel/LocalPath 9.58
43 TestAddons/parallel/NvidiaDevicePlugin 5.15
44 TestAddons/parallel/Yakd 10.29
45 TestAddons/StoppedEnableDisable 12.4
53 TestHyperKitDriverInstallOrUpdate 10.52
56 TestErrorSpam/setup 36.08
57 TestErrorSpam/start 0.33
58 TestErrorSpam/status 0.24
59 TestErrorSpam/pause 0.69
60 TestErrorSpam/unpause 0.59
61 TestErrorSpam/stop 64.3
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 49.7
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 38.98
68 TestFunctional/serial/KubeContext 0.03
69 TestFunctional/serial/KubectlGetPods 0.05
72 TestFunctional/serial/CacheCmd/cache/add_remote 5.17
73 TestFunctional/serial/CacheCmd/cache/add_local 1.17
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
75 TestFunctional/serial/CacheCmd/cache/list 0.04
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.08
77 TestFunctional/serial/CacheCmd/cache/cache_reload 1.18
78 TestFunctional/serial/CacheCmd/cache/delete 0.07
79 TestFunctional/serial/MinikubeKubectlCmd 0.83
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 1.03
81 TestFunctional/serial/ExtraConfig 34.33
82 TestFunctional/serial/ComponentHealth 0.04
83 TestFunctional/serial/LogsCmd 0.65
84 TestFunctional/serial/LogsFileCmd 0.66
85 TestFunctional/serial/InvalidService 4.25
87 TestFunctional/parallel/ConfigCmd 0.22
88 TestFunctional/parallel/DashboardCmd 9.94
89 TestFunctional/parallel/DryRun 0.29
90 TestFunctional/parallel/InternationalLanguage 0.11
91 TestFunctional/parallel/StatusCmd 0.26
96 TestFunctional/parallel/AddonsCmd 0.1
97 TestFunctional/parallel/PersistentVolumeClaim 26.63
99 TestFunctional/parallel/SSHCmd 0.13
100 TestFunctional/parallel/CpCmd 0.5
102 TestFunctional/parallel/FileSync 0.07
103 TestFunctional/parallel/CertSync 0.42
107 TestFunctional/parallel/NodeLabels 0.04
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.09
111 TestFunctional/parallel/License 0.22
112 TestFunctional/parallel/Version/short 0.04
113 TestFunctional/parallel/Version/components 0.19
114 TestFunctional/parallel/ImageCommands/ImageListShort 0.07
115 TestFunctional/parallel/ImageCommands/ImageListTable 0.08
116 TestFunctional/parallel/ImageCommands/ImageListJson 0.07
117 TestFunctional/parallel/ImageCommands/ImageListYaml 0.08
118 TestFunctional/parallel/ImageCommands/ImageBuild 2.82
119 TestFunctional/parallel/ImageCommands/Setup 1.84
120 TestFunctional/parallel/DockerEnv/bash 0.32
121 TestFunctional/parallel/UpdateContextCmd/no_changes 0.06
122 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.05
123 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.05
124 TestFunctional/parallel/ServiceCmd/DeployApp 11.09
125 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.45
126 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.36
127 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.15
128 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.15
129 TestFunctional/parallel/ImageCommands/ImageRemove 0.16
130 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.23
131 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.2
133 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.23
134 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
136 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.11
137 TestFunctional/parallel/ServiceCmd/List 0.13
138 TestFunctional/parallel/ServiceCmd/JSONOutput 0.09
139 TestFunctional/parallel/ServiceCmd/HTTPS 0.1
140 TestFunctional/parallel/ServiceCmd/Format 0.1
141 TestFunctional/parallel/ServiceCmd/URL 0.1
142 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.07
143 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
144 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.02
145 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.02
146 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
147 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
148 TestFunctional/parallel/ProfileCmd/profile_not_create 0.13
149 TestFunctional/parallel/ProfileCmd/profile_list 0.13
150 TestFunctional/parallel/ProfileCmd/profile_json_output 0.12
151 TestFunctional/parallel/MountCmd/any-port 7.43
152 TestFunctional/parallel/MountCmd/specific-port 0.92
153 TestFunctional/parallel/MountCmd/VerifyCleanup 0.93
154 TestFunctional/delete_echo-server_images 0.05
155 TestFunctional/delete_my-image_image 0.02
156 TestFunctional/delete_minikube_cached_images 0.01
160 TestMultiControlPlane/serial/StartCluster 198.92
161 TestMultiControlPlane/serial/DeployApp 5.25
162 TestMultiControlPlane/serial/PingHostFromPods 0.72
163 TestMultiControlPlane/serial/AddWorkerNode 87.57
164 TestMultiControlPlane/serial/NodeLabels 0.13
165 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.24
166 TestMultiControlPlane/serial/CopyFile 4.18
170 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 80.04
178 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.05
185 TestJSONOutput/start/Audit 0
187 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
188 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
191 TestJSONOutput/pause/Audit 0
193 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
194 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/unpause/Audit 0
199 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
200 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
202 TestJSONOutput/stop/Command 3.59
203 TestJSONOutput/stop/Audit 0
205 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
206 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
207 TestErrorJSONOutput 0.2
212 TestMainNoArgs 0.03
257 TestStoppedBinaryUpgrade/Setup 1.05
259 TestStoppedBinaryUpgrade/MinikubeLogs 0.72
270 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
274 TestNoKubernetes/serial/VerifyK8sNotRunning 0.05
275 TestNoKubernetes/serial/ProfileList 0.11
277 TestNoKubernetes/serial/Stop 3.58
280 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.04
294 TestStartStop/group/old-k8s-version/serial/Stop 3.44
295 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.12
305 TestStartStop/group/no-preload/serial/Stop 2.93
306 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.12
316 TestStartStop/group/embed-certs/serial/Stop 2.89
317 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.13
329 TestStartStop/group/default-k8s-diff-port/serial/Stop 1.77
330 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.12
332 TestStartStop/group/newest-cni/serial/DeployApp 0
333 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.06
334 TestStartStop/group/newest-cni/serial/Stop 3.22
335 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.12
341 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
342 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-031000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-031000: exit status 85 (91.557208ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-031000 | jenkins | v1.33.1 | 29 Aug 24 11:04 PDT |          |
	|         | -p download-only-031000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/29 11:04:35
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0829 11:04:35.185082    1420 out.go:345] Setting OutFile to fd 1 ...
	I0829 11:04:35.185238    1420 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 11:04:35.185241    1420 out.go:358] Setting ErrFile to fd 2...
	I0829 11:04:35.185244    1420 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 11:04:35.185368    1420 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19531-965/.minikube/bin
	W0829 11:04:35.185476    1420 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19531-965/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19531-965/.minikube/config/config.json: no such file or directory
	I0829 11:04:35.186676    1420 out.go:352] Setting JSON to true
	I0829 11:04:35.203684    1420 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":239,"bootTime":1724954436,"procs":459,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0829 11:04:35.203756    1420 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0829 11:04:35.208758    1420 out.go:97] [download-only-031000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0829 11:04:35.208912    1420 notify.go:220] Checking for updates...
	W0829 11:04:35.208932    1420 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19531-965/.minikube/cache/preloaded-tarball: no such file or directory
	I0829 11:04:35.212630    1420 out.go:169] MINIKUBE_LOCATION=19531
	I0829 11:04:35.215609    1420 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19531-965/kubeconfig
	I0829 11:04:35.220676    1420 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0829 11:04:35.223701    1420 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0829 11:04:35.226585    1420 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19531-965/.minikube
	W0829 11:04:35.232621    1420 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0829 11:04:35.232825    1420 driver.go:392] Setting default libvirt URI to qemu:///system
	I0829 11:04:35.237600    1420 out.go:97] Using the qemu2 driver based on user configuration
	I0829 11:04:35.237620    1420 start.go:297] selected driver: qemu2
	I0829 11:04:35.237636    1420 start.go:901] validating driver "qemu2" against <nil>
	I0829 11:04:35.237717    1420 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0829 11:04:35.241600    1420 out.go:169] Automatically selected the socket_vmnet network
	I0829 11:04:35.247379    1420 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0829 11:04:35.247473    1420 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0829 11:04:35.247562    1420 cni.go:84] Creating CNI manager for ""
	I0829 11:04:35.247579    1420 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0829 11:04:35.247628    1420 start.go:340] cluster config:
	{Name:download-only-031000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-031000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 11:04:35.252996    1420 iso.go:125] acquiring lock: {Name:mke1867919ff797f263eb38fb348ce00bba9d753 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 11:04:35.255669    1420 out.go:97] Downloading VM boot image ...
	I0829 11:04:35.255689    1420 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19531-965/.minikube/cache/iso/arm64/minikube-v1.33.1-1724775098-19521-arm64.iso
	I0829 11:04:46.944240    1420 out.go:97] Starting "download-only-031000" primary control-plane node in "download-only-031000" cluster
	I0829 11:04:46.944265    1420 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0829 11:04:46.998519    1420 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0829 11:04:46.998555    1420 cache.go:56] Caching tarball of preloaded images
	I0829 11:04:46.998707    1420 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0829 11:04:47.003793    1420 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0829 11:04:47.003800    1420 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0829 11:04:47.115685    1420 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19531-965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0829 11:05:02.241365    1420 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0829 11:05:02.241532    1420 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19531-965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0829 11:05:02.938473    1420 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0829 11:05:02.938656    1420 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/download-only-031000/config.json ...
	I0829 11:05:02.938673    1420 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/download-only-031000/config.json: {Name:mkc169edd70a2dc1a2bec2403108ab7bb4d18df0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0829 11:05:02.938889    1420 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0829 11:05:02.939071    1420 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19531-965/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0829 11:05:03.436176    1420 out.go:193] 
	W0829 11:05:03.442196    1420 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19531-965/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x10940f920 0x10940f920 0x10940f920 0x10940f920 0x10940f920 0x10940f920 0x10940f920] Decompressors:map[bz2:0x1400000f940 gz:0x1400000f948 tar:0x1400000f8f0 tar.bz2:0x1400000f900 tar.gz:0x1400000f910 tar.xz:0x1400000f920 tar.zst:0x1400000f930 tbz2:0x1400000f900 tgz:0x1400000f910 txz:0x1400000f920 tzst:0x1400000f930 xz:0x1400000f950 zip:0x1400000f960 zst:0x1400000f958] Getters:map[file:0x14001818550 http:0x14000578280 https:0x14000578500] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0829 11:05:03.442228    1420 out_reason.go:110] 
	W0829 11:05:03.451189    1420 out.go:283] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0829 11:05:03.455130    1420 out.go:193] 
	
	
	* The control-plane node download-only-031000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-031000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.09s)
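
The test passes because LogsDuration tolerates the exit-85 logs failure on a download-only profile, but the embedded log shows why kubectl caching failed: the getter fetches kubectl with a ?checksum=file:<url>.sha256 suffix, and that .sha256 sidecar 404s, most likely because no darwin/arm64 kubectl build was published for v1.20.0. The verify-against-sidecar pattern looks roughly like this (a sketch; URL taken from the log above):

    // Sketch: download a file and verify it against its published .sha256
    // sidecar, mirroring the getter behavior in the log.
    package main

    import (
        "crypto/sha256"
        "fmt"
        "io"
        "net/http"
        "strings"
    )

    func fetch(url string) ([]byte, error) {
        resp, err := http.Get(url)
        if err != nil {
            return nil, err
        }
        defer resp.Body.Close()
        if resp.StatusCode != http.StatusOK {
            return nil, fmt.Errorf("bad response code: %d", resp.StatusCode)
        }
        return io.ReadAll(resp.Body)
    }

    func main() {
        base := "https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl"
        sum, err := fetch(base + ".sha256")
        if err != nil {
            fmt.Println("checksum fetch failed:", err) // reproduces the 404 above
            return
        }
        bin, err := fetch(base)
        if err != nil {
            fmt.Println("download failed:", err)
            return
        }
        got := fmt.Sprintf("%x", sha256.Sum256(bin))
        fmt.Println("checksum ok:", got == strings.TrimSpace(string(sum)))
    }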

TestDownloadOnly/v1.20.0/DeleteAll (0.12s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.12s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.1s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-031000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.10s)

TestDownloadOnly/v1.31.0/json-events (7.98s)

=== RUN   TestDownloadOnly/v1.31.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-318000 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-318000 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=docker --driver=qemu2 : (7.979326375s)
--- PASS: TestDownloadOnly/v1.31.0/json-events (7.98s)

TestDownloadOnly/v1.31.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0/preload-exists (0.00s)

TestDownloadOnly/v1.31.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.0/kubectl
--- PASS: TestDownloadOnly/v1.31.0/kubectl (0.00s)

TestDownloadOnly/v1.31.0/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.31.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-318000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-318000: exit status 85 (77.185375ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-031000 | jenkins | v1.33.1 | 29 Aug 24 11:04 PDT |                     |
	|         | -p download-only-031000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 29 Aug 24 11:05 PDT | 29 Aug 24 11:05 PDT |
	| delete  | -p download-only-031000        | download-only-031000 | jenkins | v1.33.1 | 29 Aug 24 11:05 PDT | 29 Aug 24 11:05 PDT |
	| start   | -o=json --download-only        | download-only-318000 | jenkins | v1.33.1 | 29 Aug 24 11:05 PDT |                     |
	|         | -p download-only-318000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/29 11:05:03
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.5 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0829 11:05:03.863878    1448 out.go:345] Setting OutFile to fd 1 ...
	I0829 11:05:03.864021    1448 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 11:05:03.864024    1448 out.go:358] Setting ErrFile to fd 2...
	I0829 11:05:03.864027    1448 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 11:05:03.864149    1448 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19531-965/.minikube/bin
	I0829 11:05:03.865201    1448 out.go:352] Setting JSON to true
	I0829 11:05:03.881148    1448 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":267,"bootTime":1724954436,"procs":456,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0829 11:05:03.881217    1448 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0829 11:05:03.886332    1448 out.go:97] [download-only-318000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0829 11:05:03.886455    1448 notify.go:220] Checking for updates...
	I0829 11:05:03.890436    1448 out.go:169] MINIKUBE_LOCATION=19531
	I0829 11:05:03.893531    1448 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19531-965/kubeconfig
	I0829 11:05:03.897487    1448 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0829 11:05:03.900472    1448 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0829 11:05:03.903487    1448 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19531-965/.minikube
	W0829 11:05:03.909423    1448 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0829 11:05:03.909579    1448 driver.go:392] Setting default libvirt URI to qemu:///system
	I0829 11:05:03.912425    1448 out.go:97] Using the qemu2 driver based on user configuration
	I0829 11:05:03.912433    1448 start.go:297] selected driver: qemu2
	I0829 11:05:03.912438    1448 start.go:901] validating driver "qemu2" against <nil>
	I0829 11:05:03.912494    1448 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0829 11:05:03.913863    1448 out.go:169] Automatically selected the socket_vmnet network
	I0829 11:05:03.918684    1448 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0829 11:05:03.918773    1448 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0829 11:05:03.918814    1448 cni.go:84] Creating CNI manager for ""
	I0829 11:05:03.918823    1448 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0829 11:05:03.918830    1448 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0829 11:05:03.918869    1448 start.go:340] cluster config:
	{Name:download-only-318000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:download-only-318000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 11:05:03.922279    1448 iso.go:125] acquiring lock: {Name:mke1867919ff797f263eb38fb348ce00bba9d753 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0829 11:05:03.925492    1448 out.go:97] Starting "download-only-318000" primary control-plane node in "download-only-318000" cluster
	I0829 11:05:03.925500    1448 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0829 11:05:03.981646    1448 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	I0829 11:05:03.981657    1448 cache.go:56] Caching tarball of preloaded images
	I0829 11:05:03.981817    1448 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0829 11:05:03.985933    1448 out.go:97] Downloading Kubernetes v1.31.0 preload ...
	I0829 11:05:03.985941    1448 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 ...
	I0829 11:05:04.059471    1448 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4?checksum=md5:90c22abece392b762c0b4e45be981bb4 -> /Users/jenkins/minikube-integration/19531-965/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-318000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-318000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0/LogsDuration (0.08s)

TestDownloadOnly/v1.31.0/DeleteAll (0.1s)

=== RUN   TestDownloadOnly/v1.31.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.0/DeleteAll (0.10s)

TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.1s)

=== RUN   TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-318000
--- PASS: TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.10s)

TestBinaryMirror (0.37s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-471000 --alsologtostderr --binary-mirror http://127.0.0.1:49311 --driver=qemu2 
helpers_test.go:175: Cleaning up "binary-mirror-471000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-471000
--- PASS: TestBinaryMirror (0.37s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-048000
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons enable dashboard -p addons-048000: exit status 85 (53.015166ms)

-- stdout --
	* Profile "addons-048000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-048000"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-048000
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons disable dashboard -p addons-048000: exit status 85 (56.885125ms)

-- stdout --
	* Profile "addons-048000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-048000"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestAddons/Setup (199.87s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-048000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns
addons_test.go:110: (dbg) Done: out/minikube-darwin-arm64 start -p addons-048000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns: (3m19.869368041s)
--- PASS: TestAddons/Setup (199.87s)

TestAddons/serial/Volcano (38.24s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:913: volcano-controller stabilized in 7.580708ms
addons_test.go:897: volcano-scheduler stabilized in 7.741042ms
addons_test.go:905: volcano-admission stabilized in 7.83975ms
addons_test.go:919: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-576bc46687-x5b9b" [cc225d51-7d42-4b9a-b8f8-1bdf983b8585] Running
addons_test.go:919: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.011312125s
addons_test.go:923: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-77d7d48b68-zcl5d" [836f70f6-b113-4177-b980-99a8680a056f] Running
addons_test.go:923: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.005010875s
addons_test.go:927: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-56675bb4d5-nffdf" [8fe39251-c4b3-4199-a6b8-d4c79822d1a4] Running
addons_test.go:927: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.007930459s
addons_test.go:932: (dbg) Run:  kubectl --context addons-048000 delete -n volcano-system job volcano-admission-init
addons_test.go:938: (dbg) Run:  kubectl --context addons-048000 create -f testdata/vcjob.yaml
addons_test.go:946: (dbg) Run:  kubectl --context addons-048000 get vcjob -n my-volcano
addons_test.go:964: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [00483a80-8011-4592-8676-2fd5374d4735] Pending
helpers_test.go:344: "test-job-nginx-0" [00483a80-8011-4592-8676-2fd5374d4735] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [00483a80-8011-4592-8676-2fd5374d4735] Running
addons_test.go:964: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 13.004235083s
addons_test.go:968: (dbg) Run:  out/minikube-darwin-arm64 -p addons-048000 addons disable volcano --alsologtostderr -v=1
addons_test.go:968: (dbg) Done: out/minikube-darwin-arm64 -p addons-048000 addons disable volcano --alsologtostderr -v=1: (9.971312333s)
--- PASS: TestAddons/serial/Volcano (38.24s)
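
Each "waiting 6m0s for pods matching ..." step above polls pods by label until they report Running. Outside the test harness the nearest equivalent is `kubectl wait`; a sketch using the context, label, and namespace from this run:

    // Sketch: wait for the Volcano test job's pods to become ready.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        out, err := exec.Command("kubectl", "--context", "addons-048000",
            "wait", "--for=condition=ready", "pod",
            "-l", "volcano.sh/job-name=test-job",
            "-n", "my-volcano", "--timeout=180s").CombinedOutput()
        fmt.Printf("%s", out)
        if err != nil {
            fmt.Println("pods never became ready:", err)
        }
    }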

TestAddons/serial/GCPAuth/Namespaces (0.09s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-048000 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-048000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.09s)

TestAddons/parallel/Ingress (18.36s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-048000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-048000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-048000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [15267d08-9c88-4381-8d32-53fd2f55bf90] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [15267d08-9c88-4381-8d32-53fd2f55bf90] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.007836s
addons_test.go:264: (dbg) Run:  out/minikube-darwin-arm64 -p addons-048000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-048000 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-darwin-arm64 -p addons-048000 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.105.2
addons_test.go:308: (dbg) Run:  out/minikube-darwin-arm64 -p addons-048000 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:313: (dbg) Run:  out/minikube-darwin-arm64 -p addons-048000 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-darwin-arm64 -p addons-048000 addons disable ingress --alsologtostderr -v=1: (7.345764958s)
--- PASS: TestAddons/parallel/Ingress (18.36s)
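
The ingress check above curls 127.0.0.1 from inside the VM with a spoofed Host header, which is what the nginx ingress controller routes on. The same request can be made from the host against the VM IP reported by `minikube ip` (192.168.105.2 in this run); a sketch:

    // Sketch: exercise host-header routing against the ingress controller
    // from outside the VM.
    package main

    import (
        "fmt"
        "io"
        "net/http"
    )

    func main() {
        req, err := http.NewRequest("GET", "http://192.168.105.2/", nil)
        if err != nil {
            panic(err)
        }
        req.Host = "nginx.example.com" // ingress matches on the Host header
        resp, err := http.DefaultClient.Do(req)
        if err != nil {
            fmt.Println("request failed:", err)
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Println(resp.Status, len(body), "bytes")
    }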

TestAddons/parallel/InspektorGadget (10.29s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-bpz8p" [25848929-2908-4c3b-9a47-2adb27a094c3] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.005588083s
addons_test.go:851: (dbg) Run:  out/minikube-darwin-arm64 addons disable inspektor-gadget -p addons-048000
addons_test.go:851: (dbg) Done: out/minikube-darwin-arm64 addons disable inspektor-gadget -p addons-048000: (5.285297084s)
--- PASS: TestAddons/parallel/InspektorGadget (10.29s)

TestAddons/parallel/MetricsServer (5.3s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 1.24975ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-8988944d9-b7srg" [13439dd5-2130-4f2a-aa96-82686d84633f] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.011408958s
addons_test.go:417: (dbg) Run:  kubectl --context addons-048000 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-darwin-arm64 -p addons-048000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.30s)

TestAddons/parallel/CSI (48.33s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 59.549791ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-048000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-048000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-048000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-048000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-048000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-048000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-048000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-048000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-048000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [3f39fb89-398e-452f-aaaa-11c1a4cd912e] Pending
helpers_test.go:344: "task-pv-pod" [3f39fb89-398e-452f-aaaa-11c1a4cd912e] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [3f39fb89-398e-452f-aaaa-11c1a4cd912e] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 8.00989575s
addons_test.go:590: (dbg) Run:  kubectl --context addons-048000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-048000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-048000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-048000 delete pod task-pv-pod
addons_test.go:606: (dbg) Run:  kubectl --context addons-048000 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-048000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-048000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-048000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-048000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-048000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-048000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-048000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-048000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-048000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-048000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-048000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-048000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-048000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-048000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-048000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-048000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-048000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-048000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-048000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [d3ba83d9-53f3-4399-b695-a537ea82efad] Pending
helpers_test.go:344: "task-pv-pod-restore" [d3ba83d9-53f3-4399-b695-a537ea82efad] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [d3ba83d9-53f3-4399-b695-a537ea82efad] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 9.004053125s
addons_test.go:632: (dbg) Run:  kubectl --context addons-048000 delete pod task-pv-pod-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-048000 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-048000 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-darwin-arm64 -p addons-048000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-darwin-arm64 -p addons-048000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.128255792s)
addons_test.go:648: (dbg) Run:  out/minikube-darwin-arm64 -p addons-048000 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (48.33s)
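Note: condensed, the CSI block above is a create → snapshot → restore round trip against the csi-hostpath-driver addon. A sketch of the same flow, assuming the manifests from the test's testdata directory and kubectl pointed at the addons-048000 context; `kubectl wait` stands in for the test's polling loops:
	kubectl create -f testdata/csi-hostpath-driver/pvc.yaml            # claim hpvc
	kubectl create -f testdata/csi-hostpath-driver/pv-pod.yaml         # task-pv-pod mounts the claim
	kubectl create -f testdata/csi-hostpath-driver/snapshot.yaml       # VolumeSnapshot new-snapshot-demo
	kubectl wait volumesnapshot/new-snapshot-demo --for=jsonpath='{.status.readyToUse}'=true
	kubectl delete pod task-pv-pod && kubectl delete pvc hpvc
	kubectl create -f testdata/csi-hostpath-driver/pvc-restore.yaml    # hpvc-restore, sourced from the snapshot
	kubectl create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml # task-pv-pod-restore reads the restored data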

TestAddons/parallel/Headlamp (16.6s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-darwin-arm64 addons enable headlamp -p addons-048000 --alsologtostderr -v=1
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-57fb76fcdb-49bfc" [30bfc2fb-4e71-45a3-b119-58d128e6a5f5] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-57fb76fcdb-49bfc" [30bfc2fb-4e71-45a3-b119-58d128e6a5f5] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.008041583s
addons_test.go:839: (dbg) Run:  out/minikube-darwin-arm64 -p addons-048000 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-darwin-arm64 -p addons-048000 addons disable headlamp --alsologtostderr -v=1: (5.257768666s)
--- PASS: TestAddons/parallel/Headlamp (16.60s)

TestAddons/parallel/CloudSpanner (5.19s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-769b77f747-2gqs5" [7581e80a-ae2a-4182-882b-49073802d018] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.005508792s
addons_test.go:870: (dbg) Run:  out/minikube-darwin-arm64 addons disable cloud-spanner -p addons-048000
--- PASS: TestAddons/parallel/CloudSpanner (5.19s)

TestAddons/parallel/LocalPath (9.58s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-048000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-048000 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-048000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-048000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-048000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-048000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-048000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-048000 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [05e1daf6-8145-4de3-ab54-5cde368bde94] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [05e1daf6-8145-4de3-ab54-5cde368bde94] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [05e1daf6-8145-4de3-ab54-5cde368bde94] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.004369167s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-048000 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-darwin-arm64 -p addons-048000 ssh "cat /opt/local-path-provisioner/pvc-cd762ea6-df54-43ec-8e55-f5f3b0bc5b40_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-048000 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-048000 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-darwin-arm64 -p addons-048000 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (9.58s)
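Note: the LocalPath block verifies that the local-path provisioner backs a claim with a plain directory on the node. A sketch of the manual check, assuming the test's manifests; the pvc-<uid> segment of the host path differs for every claim, so the sketch lists the directory instead of cat-ing a fixed file:
	kubectl apply -f testdata/storage-provisioner-rancher/pvc.yaml   # claim test-pvc
	kubectl apply -f testdata/storage-provisioner-rancher/pod.yaml   # busybox pod writes file1 into the volume
	# the data lands in a per-claim directory on the node (pvc-<uid>_default_test-pvc)
	out/minikube-darwin-arm64 -p addons-048000 ssh "ls /opt/local-path-provisioner/"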

TestAddons/parallel/NvidiaDevicePlugin (5.15s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-t7r5k" [2700f353-f50b-4859-8e8b-852b4f080fb8] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.00396925s
addons_test.go:1064: (dbg) Run:  out/minikube-darwin-arm64 addons disable nvidia-device-plugin -p addons-048000
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.15s)

TestAddons/parallel/Yakd (10.29s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-fntc9" [d461e2b4-7dc0-4787-9d5f-a7fff18a5dcd] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.009662875s
addons_test.go:1076: (dbg) Run:  out/minikube-darwin-arm64 -p addons-048000 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-darwin-arm64 -p addons-048000 addons disable yakd --alsologtostderr -v=1: (5.281483042s)
--- PASS: TestAddons/parallel/Yakd (10.29s)

TestAddons/StoppedEnableDisable (12.4s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-darwin-arm64 stop -p addons-048000
addons_test.go:174: (dbg) Done: out/minikube-darwin-arm64 stop -p addons-048000: (12.211234s)
addons_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-048000
addons_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-048000
addons_test.go:187: (dbg) Run:  out/minikube-darwin-arm64 addons disable gvisor -p addons-048000
--- PASS: TestAddons/StoppedEnableDisable (12.40s)

TestHyperKitDriverInstallOrUpdate (10.52s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate
=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (10.52s)

TestErrorSpam/setup (36.08s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-808000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-808000 --driver=qemu2 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -p nospam-808000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-808000 --driver=qemu2 : (36.083099125s)
error_spam_test.go:91: acceptable stderr: "! /usr/local/bin/kubectl is version 1.29.2, which may have incompatibilities with Kubernetes 1.31.0."
--- PASS: TestErrorSpam/setup (36.08s)

TestErrorSpam/start (0.33s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-808000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-808000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-808000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-808000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-808000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-808000 start --dry-run
--- PASS: TestErrorSpam/start (0.33s)

TestErrorSpam/status (0.24s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-808000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-808000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-808000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-808000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-808000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-808000 status
--- PASS: TestErrorSpam/status (0.24s)

TestErrorSpam/pause (0.69s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-808000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-808000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-808000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-808000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-808000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-808000 pause
--- PASS: TestErrorSpam/pause (0.69s)

TestErrorSpam/unpause (0.59s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-808000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-808000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-808000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-808000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-808000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-808000 unpause
--- PASS: TestErrorSpam/unpause (0.59s)

TestErrorSpam/stop (64.3s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-808000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-808000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-808000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-808000 stop: (12.208222041s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-808000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-808000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-808000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-808000 stop: (26.058339792s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-808000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-808000 stop
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-arm64 -p nospam-808000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-808000 stop: (26.03315325s)
--- PASS: TestErrorSpam/stop (64.30s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /Users/jenkins/minikube-integration/19531-965/.minikube/files/etc/test/nested/copy/1418/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (49.7s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-312000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
functional_test.go:2234: (dbg) Done: out/minikube-darwin-arm64 start -p functional-312000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : (49.701915875s)
--- PASS: TestFunctional/serial/StartWithProxy (49.70s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (38.98s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-312000 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-darwin-arm64 start -p functional-312000 --alsologtostderr -v=8: (38.97592275s)
functional_test.go:663: soft start took 38.976338583s for "functional-312000" cluster.
--- PASS: TestFunctional/serial/SoftStart (38.98s)

TestFunctional/serial/KubeContext (0.03s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.03s)

TestFunctional/serial/KubectlGetPods (0.05s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-312000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.05s)

TestFunctional/serial/CacheCmd/cache/add_remote (5.17s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-312000 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-darwin-arm64 -p functional-312000 cache add registry.k8s.io/pause:3.1: (2.071047s)
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-312000 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-darwin-arm64 -p functional-312000 cache add registry.k8s.io/pause:3.3: (1.789809667s)
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-312000 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-darwin-arm64 -p functional-312000 cache add registry.k8s.io/pause:latest: (1.311386875s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (5.17s)

TestFunctional/serial/CacheCmd/cache/add_local (1.17s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-312000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalserialCacheCmdcacheadd_local2401053645/001
functional_test.go:1089: (dbg) Run:  out/minikube-darwin-arm64 -p functional-312000 cache add minikube-local-cache-test:functional-312000
functional_test.go:1094: (dbg) Run:  out/minikube-darwin-arm64 -p functional-312000 cache delete minikube-local-cache-test:functional-312000
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-312000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.17s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-darwin-arm64 -p functional-312000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.08s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.18s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-darwin-arm64 -p functional-312000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-darwin-arm64 -p functional-312000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-312000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (73.168458ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-darwin-arm64 -p functional-312000 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-darwin-arm64 -p functional-312000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.18s)
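Note: the exchange above is the point of `cache reload`: once the image is removed inside the node, `crictl inspecti` fails until the cached copy is pushed back. A by-hand sketch of the same cycle, assuming the functional-312000 profile:
	out/minikube-darwin-arm64 -p functional-312000 ssh sudo docker rmi registry.k8s.io/pause:latest
	out/minikube-darwin-arm64 -p functional-312000 ssh sudo crictl inspecti registry.k8s.io/pause:latest \
	  || echo "image absent, as expected"
	out/minikube-darwin-arm64 -p functional-312000 cache reload   # re-push everything in the local cache
	out/minikube-darwin-arm64 -p functional-312000 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds now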

TestFunctional/serial/CacheCmd/cache/delete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.07s)

TestFunctional/serial/MinikubeKubectlCmd (0.83s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-darwin-arm64 -p functional-312000 kubectl -- --context functional-312000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.83s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (1.03s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-312000 get pods
functional_test.go:741: (dbg) Done: out/kubectl --context functional-312000 get pods: (1.027944709s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (1.03s)

TestFunctional/serial/ExtraConfig (34.33s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-312000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-darwin-arm64 start -p functional-312000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (34.332306625s)
functional_test.go:761: restart took 34.332409s for "functional-312000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (34.33s)
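Note: `--extra-config` takes the form `component.key=value` and is applied on (re)start, which is what the restart above exercises. A sketch of the flag in isolation:
	# restart the existing cluster with an extra apiserver admission plugin enabled
	out/minikube-darwin-arm64 start -p functional-312000 \
	  --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all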

TestFunctional/serial/ComponentHealth (0.04s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-312000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.04s)

TestFunctional/serial/LogsCmd (0.65s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-darwin-arm64 -p functional-312000 logs
--- PASS: TestFunctional/serial/LogsCmd (0.65s)

TestFunctional/serial/LogsFileCmd (0.66s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-darwin-arm64 -p functional-312000 logs --file /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalserialLogsFileCmd3439054315/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.66s)

TestFunctional/serial/InvalidService (4.25s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-312000 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-darwin-arm64 service invalid-svc -p functional-312000
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-darwin-arm64 service invalid-svc -p functional-312000: exit status 115 (148.335167ms)

-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.105.4:31082 |
	|-----------|-------------|-------------|----------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-312000 delete -f testdata/invalidsvc.yaml
functional_test.go:2327: (dbg) Done: kubectl --context functional-312000 delete -f testdata/invalidsvc.yaml: (1.010419334s)
--- PASS: TestFunctional/serial/InvalidService (4.25s)
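Note: the block above pins down the failure mode of `minikube service` for a service with no running backing pod: exit status 115 and an SVC_UNREACHABLE message, even though the NodePort URL is still printed. A sketch of the check:
	kubectl --context functional-312000 apply -f testdata/invalidsvc.yaml
	out/minikube-darwin-arm64 service invalid-svc -p functional-312000; echo "exit: $?"   # expect 115
	kubectl --context functional-312000 delete -f testdata/invalidsvc.yaml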

TestFunctional/parallel/ConfigCmd (0.22s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-312000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-312000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-312000 config get cpus: exit status 14 (29.352375ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-312000 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-312000 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-312000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-312000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-312000 config get cpus: exit status 14 (30.387833ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.22s)
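Note: the sequence relies on `config get` exiting with status 14 when the key is unset, as both non-zero exits above show. Sketch of the set/get/unset cycle:
	out/minikube-darwin-arm64 -p functional-312000 config unset cpus
	out/minikube-darwin-arm64 -p functional-312000 config get cpus; echo "exit: $?"   # 14: key not in config
	out/minikube-darwin-arm64 -p functional-312000 config set cpus 2
	out/minikube-darwin-arm64 -p functional-312000 config get cpus                    # prints 2, exit 0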

TestFunctional/parallel/DashboardCmd (9.94s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-312000 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-312000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 2325: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (9.94s)

TestFunctional/parallel/DryRun (0.29s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-312000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:974: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-312000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (144.366792ms)

-- stdout --
	* [functional-312000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19531
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19531-965/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19531-965/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0829 11:24:00.227944    2308 out.go:345] Setting OutFile to fd 1 ...
	I0829 11:24:00.228075    2308 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 11:24:00.228079    2308 out.go:358] Setting ErrFile to fd 2...
	I0829 11:24:00.228081    2308 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 11:24:00.228200    2308 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19531-965/.minikube/bin
	I0829 11:24:00.229408    2308 out.go:352] Setting JSON to false
	I0829 11:24:00.247415    2308 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1404,"bootTime":1724954436,"procs":478,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0829 11:24:00.247576    2308 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0829 11:24:00.250883    2308 out.go:177] * [functional-312000] minikube v1.33.1 on Darwin 14.5 (arm64)
	I0829 11:24:00.257993    2308 notify.go:220] Checking for updates...
	I0829 11:24:00.261875    2308 out.go:177]   - MINIKUBE_LOCATION=19531
	I0829 11:24:00.271859    2308 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19531-965/kubeconfig
	I0829 11:24:00.279292    2308 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0829 11:24:00.288907    2308 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0829 11:24:00.295908    2308 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19531-965/.minikube
	I0829 11:24:00.307917    2308 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0829 11:24:00.311185    2308 config.go:182] Loaded profile config "functional-312000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0829 11:24:00.311456    2308 driver.go:392] Setting default libvirt URI to qemu:///system
	I0829 11:24:00.314899    2308 out.go:177] * Using the qemu2 driver based on existing profile
	I0829 11:24:00.321911    2308 start.go:297] selected driver: qemu2
	I0829 11:24:00.321917    2308 start.go:901] validating driver "qemu2" against &{Name:functional-312000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-312000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 11:24:00.321964    2308 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0829 11:24:00.327849    2308 out.go:201] 
	W0829 11:24:00.331827    2308 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0829 11:24:00.335924    2308 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-312000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.29s)
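Note: the pair of dry runs checks argument validation without touching the VM: an undersized --memory makes `start --dry-run` exit 23 with RSRC_INSUFFICIENT_REQ_MEMORY, while the flagless dry run validates cleanly. Sketch:
	out/minikube-darwin-arm64 start -p functional-312000 --dry-run --memory 250MB --driver=qemu2; echo "exit: $?"   # 23
	out/minikube-darwin-arm64 start -p functional-312000 --dry-run --driver=qemu2   # validates cleanly, exit 0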

TestFunctional/parallel/InternationalLanguage (0.11s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-312000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-312000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (108.68125ms)

-- stdout --
	* [functional-312000] minikube v1.33.1 sur Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19531
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19531-965/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19531-965/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0829 11:24:00.508368    2319 out.go:345] Setting OutFile to fd 1 ...
	I0829 11:24:00.508488    2319 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 11:24:00.508492    2319 out.go:358] Setting ErrFile to fd 2...
	I0829 11:24:00.508494    2319 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0829 11:24:00.508615    2319 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19531-965/.minikube/bin
	I0829 11:24:00.510053    2319 out.go:352] Setting JSON to false
	I0829 11:24:00.526829    2319 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":1404,"bootTime":1724954436,"procs":478,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0829 11:24:00.526917    2319 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0829 11:24:00.531897    2319 out.go:177] * [functional-312000] minikube v1.33.1 sur Darwin 14.5 (arm64)
	I0829 11:24:00.538956    2319 out.go:177]   - MINIKUBE_LOCATION=19531
	I0829 11:24:00.539012    2319 notify.go:220] Checking for updates...
	I0829 11:24:00.545900    2319 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19531-965/kubeconfig
	I0829 11:24:00.548916    2319 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0829 11:24:00.551933    2319 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0829 11:24:00.554863    2319 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19531-965/.minikube
	I0829 11:24:00.557967    2319 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0829 11:24:00.561172    2319 config.go:182] Loaded profile config "functional-312000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0829 11:24:00.561416    2319 driver.go:392] Setting default libvirt URI to qemu:///system
	I0829 11:24:00.565909    2319 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I0829 11:24:00.571804    2319 start.go:297] selected driver: qemu2
	I0829 11:24:00.571810    2319 start.go:901] validating driver "qemu2" against &{Name:functional-312000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19521/minikube-v1.33.1-1724775098-19521-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724775115-19521@sha256:5e61ebc6e68d69e31cadead040aa9b41aa36d281b29a7d562fa41077c99ed3ce Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-312000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.105.4 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0829 11:24:00.571852    2319 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0829 11:24:00.577882    2319 out.go:201] 
	W0829 11:24:00.581976    2319 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0829 11:24:00.585903    2319 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.11s)

TestFunctional/parallel/StatusCmd (0.26s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-darwin-arm64 -p functional-312000 status
functional_test.go:860: (dbg) Run:  out/minikube-darwin-arm64 -p functional-312000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-darwin-arm64 -p functional-312000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.26s)

TestFunctional/parallel/AddonsCmd (0.1s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-darwin-arm64 -p functional-312000 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-darwin-arm64 -p functional-312000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.10s)

TestFunctional/parallel/PersistentVolumeClaim (26.63s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [0b815560-b36d-412c-b4fa-17d80a9ac18f] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.009650584s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-312000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-312000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-312000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-312000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [4ab799f4-e2dd-4400-9c43-7f80f2533cc1] Pending
E0829 11:23:33.882783    1418 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/addons-048000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "sp-pod" [4ab799f4-e2dd-4400-9c43-7f80f2533cc1] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
E0829 11:23:35.166484    1418 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/addons-048000/client.crt: no such file or directory" logger="UnhandledError"
E0829 11:23:37.729951    1418 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/addons-048000/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "sp-pod" [4ab799f4-e2dd-4400-9c43-7f80f2533cc1] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.010045083s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-312000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-312000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-312000 delete -f testdata/storage-provisioner/pod.yaml: (1.069610458s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-312000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [5213967b-8c32-40bb-8deb-86bf6db2c6f3] Pending
helpers_test.go:344: "sp-pod" [5213967b-8c32-40bb-8deb-86bf6db2c6f3] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [5213967b-8c32-40bb-8deb-86bf6db2c6f3] Running
E0829 11:23:53.097652    1418 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/addons-048000/client.crt: no such file or directory" logger="UnhandledError"
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.010262333s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-312000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (26.63s)
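Stripped of the harness, the persistence check above boils down to the following workflow; the second apply matters because /tmp/mount is backed by the claim, so a file written before the pod was deleted is still visible to its replacement:

  kubectl --context functional-312000 apply -f testdata/storage-provisioner/pvc.yaml
  kubectl --context functional-312000 apply -f testdata/storage-provisioner/pod.yaml
  kubectl --context functional-312000 exec sp-pod -- touch /tmp/mount/foo
  # Delete and re-create the consumer; the claim (and the file) survives
  kubectl --context functional-312000 delete -f testdata/storage-provisioner/pod.yaml
  kubectl --context functional-312000 apply -f testdata/storage-provisioner/pod.yaml
  kubectl --context functional-312000 exec sp-pod -- ls /tmp/mount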

TestFunctional/parallel/SSHCmd (0.13s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-darwin-arm64 -p functional-312000 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-darwin-arm64 -p functional-312000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.13s)

TestFunctional/parallel/CpCmd (0.5s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-312000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-312000 ssh -n functional-312000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-312000 cp functional-312000:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelCpCmd1801531605/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-312000 ssh -n functional-312000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-312000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-312000 ssh -n functional-312000 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.50s)
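The three cp invocations cover host-to-node, node-to-host, and copying into a directory that does not yet exist; a condensed sketch (the local destination path is illustrative):

  # Host -> node
  out/minikube-darwin-arm64 -p functional-312000 cp testdata/cp-test.txt /home/docker/cp-test.txt
  # Node -> host (hypothetical local destination)
  out/minikube-darwin-arm64 -p functional-312000 cp functional-312000:/home/docker/cp-test.txt ./cp-test.txt
  # Missing target directories inside the node are created on the fly
  out/minikube-darwin-arm64 -p functional-312000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt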

TestFunctional/parallel/FileSync (0.07s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/1418/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-darwin-arm64 -p functional-312000 ssh "sudo cat /etc/test/nested/copy/1418/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.07s)
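File sync works by mirroring $MINIKUBE_HOME/files into the VM's root filesystem at start-up, which is how the /etc/test/nested/copy/1418/hosts path checked above got populated. A minimal sketch, assuming the default ~/.minikube home and an illustrative target path:

  # Anything under ~/.minikube/files/ is copied into the VM rooted at /
  mkdir -p ~/.minikube/files/etc/test
  echo "Test file for checking file sync process" > ~/.minikube/files/etc/test/hosts
  # After the next start, the file is readable at /etc/test/hosts inside the VM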

TestFunctional/parallel/CertSync (0.42s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/1418.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-312000 ssh "sudo cat /etc/ssl/certs/1418.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/1418.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-312000 ssh "sudo cat /usr/share/ca-certificates/1418.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-312000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/14182.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-312000 ssh "sudo cat /etc/ssl/certs/14182.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/14182.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-312000 ssh "sudo cat /usr/share/ca-certificates/14182.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-312000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (0.42s)
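Cert sync is the same mechanism for CA certificates: files dropped into $MINIKUBE_HOME/certs are installed into the VM's trust store, and the 51391683.0 / 3ec20f2e.0 names checked above are OpenSSL subject-hash links to those certificates. A sketch with an illustrative certificate name:

  # my-ca.pem is a placeholder for any additional CA certificate
  cp my-ca.pem ~/.minikube/certs/
  # After the next start it should be visible in the VM's cert directory
  out/minikube-darwin-arm64 -p functional-312000 ssh "sudo cat /etc/ssl/certs/my-ca.pem"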

TestFunctional/parallel/NodeLabels (0.04s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-312000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.04s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.09s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-darwin-arm64 -p functional-312000 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-312000 ssh "sudo systemctl is-active crio": exit status 1 (89.219667ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.09s)
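The pass condition here is the exit-code contract of systemctl is-active: 0 when the unit is active, non-zero otherwise (here status 3 with "inactive" on stdout). Since this profile runs the docker runtime, cri-o must report inactive:

  # Expected to exit non-zero and print "inactive" on a docker-runtime cluster
  out/minikube-darwin-arm64 -p functional-312000 ssh "sudo systemctl is-active crio"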

TestFunctional/parallel/License (0.22s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-darwin-arm64 license
--- PASS: TestFunctional/parallel/License (0.22s)

TestFunctional/parallel/Version/short (0.04s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-darwin-arm64 -p functional-312000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

TestFunctional/parallel/Version/components (0.19s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-darwin-arm64 -p functional-312000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.19s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-312000 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-312000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.0
registry.k8s.io/kube-proxy:v1.31.0
registry.k8s.io/kube-controller-manager:v1.31.0
registry.k8s.io/kube-apiserver:v1.31.0
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-312000
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kicbase/echo-server:functional-312000
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-312000 image ls --format short --alsologtostderr:
I0829 11:24:05.595849    2372 out.go:345] Setting OutFile to fd 1 ...
I0829 11:24:05.596024    2372 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0829 11:24:05.596028    2372 out.go:358] Setting ErrFile to fd 2...
I0829 11:24:05.596030    2372 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0829 11:24:05.596163    2372 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19531-965/.minikube/bin
I0829 11:24:05.596636    2372 config.go:182] Loaded profile config "functional-312000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0829 11:24:05.596704    2372 config.go:182] Loaded profile config "functional-312000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0829 11:24:05.597570    2372 ssh_runner.go:195] Run: systemctl --version
I0829 11:24:05.597579    2372 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19531-965/.minikube/machines/functional-312000/id_rsa Username:docker}
I0829 11:24:05.626908    2372 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.07s)
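This and the following three subtests render the same cached image list in each supported output format:

  out/minikube-darwin-arm64 -p functional-312000 image ls --format short
  out/minikube-darwin-arm64 -p functional-312000 image ls --format table
  out/minikube-darwin-arm64 -p functional-312000 image ls --format json
  out/minikube-darwin-arm64 -p functional-312000 image ls --format yaml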

TestFunctional/parallel/ImageCommands/ImageListTable (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-312000 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-312000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/etcd                        | 3.5.15-0          | 27e3830e14027 | 139MB  |
| registry.k8s.io/coredns/coredns             | v1.11.1           | 2437cf7621777 | 57.4MB |
| docker.io/library/minikube-local-cache-test | functional-312000 | 917f5a9cad1fd | 30B    |
| registry.k8s.io/kube-proxy                  | v1.31.0           | 71d55d66fd4ee | 94.7MB |
| docker.io/kubernetesui/metrics-scraper      | <none>            | a422e0e982356 | 42.3MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | ba04bb24b9575 | 29MB   |
| docker.io/library/nginx                     | alpine            | 70594c812316a | 47MB   |
| registry.k8s.io/pause                       | 3.10              | afb61768ce381 | 514kB  |
| registry.k8s.io/kube-controller-manager     | v1.31.0           | fcb0683e6bdbd | 85.9MB |
| registry.k8s.io/pause                       | 3.3               | 3d18732f8686c | 484kB  |
| registry.k8s.io/echoserver-arm              | 1.8               | 72565bf5bbedf | 85MB   |
| registry.k8s.io/pause                       | latest            | 8cb2091f603e7 | 240kB  |
| localhost/my-image                          | functional-312000 | 13b1c6cb7c8af | 1.41MB |
| registry.k8s.io/kube-apiserver              | v1.31.0           | cd0f0ae0ec9e0 | 91.5MB |
| docker.io/kicbase/echo-server               | functional-312000 | ce2d2cda2d858 | 4.78MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 1611cd07b61d5 | 3.55MB |
| registry.k8s.io/pause                       | 3.1               | 8057e0500773a | 525kB  |
| docker.io/library/nginx                     | latest            | a9dfdba8b7190 | 193MB  |
| registry.k8s.io/kube-scheduler              | v1.31.0           | fbbbd428abb4d | 66MB   |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-312000 image ls --format table --alsologtostderr:
I0829 11:24:08.645624    2385 out.go:345] Setting OutFile to fd 1 ...
I0829 11:24:08.645794    2385 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0829 11:24:08.645797    2385 out.go:358] Setting ErrFile to fd 2...
I0829 11:24:08.645800    2385 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0829 11:24:08.645922    2385 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19531-965/.minikube/bin
I0829 11:24:08.646364    2385 config.go:182] Loaded profile config "functional-312000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0829 11:24:08.646425    2385 config.go:182] Loaded profile config "functional-312000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0829 11:24:08.647307    2385 ssh_runner.go:195] Run: systemctl --version
I0829 11:24:08.647315    2385 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19531-965/.minikube/machines/functional-312000/id_rsa Username:docker}
I0829 11:24:08.676247    2385 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
2024/08/29 11:24:10 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.08s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-312000 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-312000 image ls --format json --alsologtostderr:
[{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"525000"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"70594c812316a9bc20dd5d679982c6322dc7cf0128687ae9f849d0207783e753","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"47000000"},{"id":"71d55d66fd4eec8986225089a135fadd96bc6624d987096808772ce1e1924d89","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.31.0"],"size":"94700000"},{"id":"afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10"],"size":"514000"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-312000"],"size":"4780000"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"42300000"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29000000"},{"id":"13b1c6cb7c8af1ba43c267dfce5962401a04a0234b5ede677a286b5019fcdc1a","repoDigests":[],"repoTags":["localhost/my-image:functional-312000"],"size":"1410000"},{"id":"a9dfdba8b719078c5705fdecd6f8315765cc79e473111aa9451551ddc340b2bc","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"193000000"},{"id":"cd0f0ae0ec9e0cdc092079156c122bf034ba3f24d31c1b1dd1b52a42ecf9b388","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.0"],"size":"91500000"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"484000"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3550000"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":[],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"85000000"},{"id":"27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"139000000"},{"id":"2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"57400000"},{"id":"917f5a9cad1fd80ad7f6426838b28d494ca5bb128a3d6035aea2ac0be1be4d90","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-312000"],"size":"30"},{"id":"fcb0683e6bdbd083710cf2d6fd7eb699c77fe4994c38a5c82d059e2e3cb4c2fd","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.0"],"size":"85900000"},{"id":"fbbbd428abb4dae52ab3018797d00d5840a739f0cc5697b662791831a60b0adb","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.0"],"size":"66000000"}]
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-312000 image ls --format json --alsologtostderr:
I0829 11:24:08.572925    2383 out.go:345] Setting OutFile to fd 1 ...
I0829 11:24:08.573068    2383 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0829 11:24:08.573076    2383 out.go:358] Setting ErrFile to fd 2...
I0829 11:24:08.573078    2383 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0829 11:24:08.573196    2383 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19531-965/.minikube/bin
I0829 11:24:08.573609    2383 config.go:182] Loaded profile config "functional-312000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0829 11:24:08.573667    2383 config.go:182] Loaded profile config "functional-312000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0829 11:24:08.574513    2383 ssh_runner.go:195] Run: systemctl --version
I0829 11:24:08.574522    2383 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19531-965/.minikube/machines/functional-312000/id_rsa Username:docker}
I0829 11:24:08.603916    2383 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.07s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-312000 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-312000 image ls --format yaml --alsologtostderr:
- id: a9dfdba8b719078c5705fdecd6f8315765cc79e473111aa9451551ddc340b2bc
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "193000000"
- id: 2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "57400000"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29000000"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests: []
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "85000000"
- id: cd0f0ae0ec9e0cdc092079156c122bf034ba3f24d31c1b1dd1b52a42ecf9b388
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.0
size: "91500000"
- id: fbbbd428abb4dae52ab3018797d00d5840a739f0cc5697b662791831a60b0adb
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.0
size: "66000000"
- id: fcb0683e6bdbd083710cf2d6fd7eb699c77fe4994c38a5c82d059e2e3cb4c2fd
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.0
size: "85900000"
- id: afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10
size: "514000"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3550000"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "525000"
- id: 917f5a9cad1fd80ad7f6426838b28d494ca5bb128a3d6035aea2ac0be1be4d90
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-312000
size: "30"
- id: 70594c812316a9bc20dd5d679982c6322dc7cf0128687ae9f849d0207783e753
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "47000000"
- id: 27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "139000000"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-312000
size: "4780000"
- id: 71d55d66fd4eec8986225089a135fadd96bc6624d987096808772ce1e1924d89
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.31.0
size: "94700000"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "42300000"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "484000"

functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-312000 image ls --format yaml --alsologtostderr:
I0829 11:24:05.669114    2374 out.go:345] Setting OutFile to fd 1 ...
I0829 11:24:05.669300    2374 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0829 11:24:05.669304    2374 out.go:358] Setting ErrFile to fd 2...
I0829 11:24:05.669307    2374 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0829 11:24:05.669436    2374 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19531-965/.minikube/bin
I0829 11:24:05.669900    2374 config.go:182] Loaded profile config "functional-312000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0829 11:24:05.669964    2374 config.go:182] Loaded profile config "functional-312000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0829 11:24:05.670835    2374 ssh_runner.go:195] Run: systemctl --version
I0829 11:24:05.670844    2374 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19531-965/.minikube/machines/functional-312000/id_rsa Username:docker}
I0829 11:24:05.704088    2374 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.08s)

TestFunctional/parallel/ImageCommands/ImageBuild (2.82s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-darwin-arm64 -p functional-312000 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-312000 ssh pgrep buildkitd: exit status 1 (63.752542ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-darwin-arm64 -p functional-312000 image build -t localhost/my-image:functional-312000 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-darwin-arm64 -p functional-312000 image build -t localhost/my-image:functional-312000 testdata/build --alsologtostderr: (2.681389709s)
functional_test.go:323: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-312000 image build -t localhost/my-image:functional-312000 testdata/build --alsologtostderr:
I0829 11:24:05.813455    2378 out.go:345] Setting OutFile to fd 1 ...
I0829 11:24:05.813676    2378 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0829 11:24:05.813679    2378 out.go:358] Setting ErrFile to fd 2...
I0829 11:24:05.813681    2378 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0829 11:24:05.813798    2378 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19531-965/.minikube/bin
I0829 11:24:05.814225    2378 config.go:182] Loaded profile config "functional-312000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0829 11:24:05.815184    2378 config.go:182] Loaded profile config "functional-312000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0829 11:24:05.816008    2378 ssh_runner.go:195] Run: systemctl --version
I0829 11:24:05.816015    2378 sshutil.go:53] new ssh client: &{IP:192.168.105.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19531-965/.minikube/machines/functional-312000/id_rsa Username:docker}
I0829 11:24:05.845774    2378 build_images.go:161] Building image from path: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.2232430062.tar
I0829 11:24:05.845842    2378 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0829 11:24:05.849396    2378 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2232430062.tar
I0829 11:24:05.850873    2378 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2232430062.tar: stat -c "%s %y" /var/lib/minikube/build/build.2232430062.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.2232430062.tar': No such file or directory
I0829 11:24:05.850890    2378 ssh_runner.go:362] scp /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.2232430062.tar --> /var/lib/minikube/build/build.2232430062.tar (3072 bytes)
I0829 11:24:05.859967    2378 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2232430062
I0829 11:24:05.864293    2378 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2232430062 -xf /var/lib/minikube/build/build.2232430062.tar
I0829 11:24:05.867681    2378 docker.go:360] Building image: /var/lib/minikube/build/build.2232430062
I0829 11:24:05.867738    2378 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-312000 /var/lib/minikube/build/build.2232430062
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.4s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9 527B / 527B done
#5 sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02 1.47kB / 1.47kB done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.1s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.8s
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.0s done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.8s done
#5 DONE 0.9s

#6 [2/3] RUN true
#6 DONE 0.2s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:13b1c6cb7c8af1ba43c267dfce5962401a04a0234b5ede677a286b5019fcdc1a done
#8 naming to localhost/my-image:functional-312000 done
#8 DONE 0.0s
I0829 11:24:08.449843    2378 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-312000 /var/lib/minikube/build/build.2232430062: (2.582116s)
I0829 11:24:08.449917    2378 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2232430062
I0829 11:24:08.453736    2378 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2232430062.tar
I0829 11:24:08.457175    2378 build_images.go:217] Built localhost/my-image:functional-312000 from /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/build.2232430062.tar
I0829 11:24:08.457190    2378 build_images.go:133] succeeded building to: functional-312000
I0829 11:24:08.457192    2378 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-312000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.82s)
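The context shipped to the VM is tiny (a 97 B Dockerfile plus 62 B of build context, per the transfer lines above), and the three numbered stages imply a Dockerfile along these lines; this reconstruction is illustrative, not the literal testdata file:

  # Hypothetical testdata/build/Dockerfile matching stages [1/3]..[3/3]:
  #   FROM gcr.io/k8s-minikube/busybox
  #   RUN true
  #   ADD content.txt /
  out/minikube-darwin-arm64 -p functional-312000 image build -t localhost/my-image:functional-312000 testdata/build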

TestFunctional/parallel/ImageCommands/Setup (1.84s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.818657625s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-312000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.84s)

TestFunctional/parallel/DockerEnv/bash (0.32s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:499: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-312000 docker-env) && out/minikube-darwin-arm64 status -p functional-312000"
functional_test.go:522: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-312000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.32s)
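docker-env prints shell export statements that point the local docker client at the daemon inside the VM, so eval-ing its output is the entire trick:

  # Route this shell's docker client to the daemon inside functional-312000
  eval $(out/minikube-darwin-arm64 -p functional-312000 docker-env)
  docker images   # now lists the cluster's images, not the host's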

TestFunctional/parallel/UpdateContextCmd/no_changes (0.06s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-312000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.06s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-312000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-312000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)
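All three variants exercise the same operation: update-context rewrites the profile's kubeconfig entry to the cluster's current API server address, so on a healthy cluster it is a cheap no-op:

  # Re-point kubeconfig at the cluster's current IP:port (useful after a VM IP change)
  out/minikube-darwin-arm64 -p functional-312000 update-context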

TestFunctional/parallel/ServiceCmd/DeployApp (11.09s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-312000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-312000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64b4f8f9ff-9zv77" [cd27663d-a0ab-41d3-87b9-2100b179a622] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-64b4f8f9ff-9zv77" [cd27663d-a0ab-41d3-87b9-2100b179a622] Running / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.012099625s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.09s)
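The deployment-plus-NodePort pattern used here is what the later ServiceCmd subtests query; reduced to its two commands:

  kubectl --context functional-312000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
  kubectl --context functional-312000 expose deployment hello-node --type=NodePort --port=8080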

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.45s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-darwin-arm64 -p functional-312000 image load --daemon kicbase/echo-server:functional-312000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-312000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.45s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.36s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-darwin-arm64 -p functional-312000 image load --daemon kicbase/echo-server:functional-312000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-312000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.36s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.15s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-312000
functional_test.go:245: (dbg) Run:  out/minikube-darwin-arm64 -p functional-312000 image load --daemon kicbase/echo-server:functional-312000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-312000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.15s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.15s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-darwin-arm64 -p functional-312000 image save kicbase/echo-server:functional-312000 /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.15s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.16s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-darwin-arm64 -p functional-312000 image rm kicbase/echo-server:functional-312000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-312000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.16s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-darwin-arm64 -p functional-312000 image load /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-312000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.23s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.2s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-312000
functional_test.go:424: (dbg) Run:  out/minikube-darwin-arm64 -p functional-312000 image save --daemon kicbase/echo-server:functional-312000 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-312000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.20s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.23s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-312000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-312000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-312000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 2197: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-312000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.23s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-312000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-312000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [e0721cd5-fbff-41d9-8c23-22be2a971d22] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [e0721cd5-fbff-41d9-8c23-22be2a971d22] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.008550791s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.11s)
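The tunnel started two subtests earlier is what gives this LoadBalancer service an ingress IP at all: minikube tunnel is a long-lived process that programs routes so service IPs become reachable from the host. The shape of the workflow:

  # Shell 1: keep running; assigns and routes ingress IPs for LoadBalancer services
  out/minikube-darwin-arm64 -p functional-312000 tunnel
  # Shell 2: read the assigned ingress IP once the nginx-svc pod is Running
  kubectl --context functional-312000 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'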

TestFunctional/parallel/ServiceCmd/List (0.13s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-darwin-arm64 -p functional-312000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.13s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.09s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-darwin-arm64 -p functional-312000 service list -o json
functional_test.go:1494: Took "86.976375ms" to run "out/minikube-darwin-arm64 -p functional-312000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.09s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.1s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-darwin-arm64 -p functional-312000 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.105.4:30090
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.10s)

TestFunctional/parallel/ServiceCmd/Format (0.1s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-darwin-arm64 -p functional-312000 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.10s)

TestFunctional/parallel/ServiceCmd/URL (0.1s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-darwin-arm64 -p functional-312000 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.105.4:30090
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.10s)
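Taken together, the HTTPS, Format, and URL subtests are the endpoint-discovery variants of the service command:

  out/minikube-darwin-arm64 -p functional-312000 service hello-node --url
  out/minikube-darwin-arm64 -p functional-312000 service --namespace=default --https --url hello-node
  out/minikube-darwin-arm64 -p functional-312000 service hello-node --url --format={{.IP}}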

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-312000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.96.160.208 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:327: DNS resolution by dig for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:424: tunnel at http://nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-312000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.13s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.13s)
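Note: "profile lis" is a deliberate misspelling, not a transcription error in this report: profile_not_create asserts that a bogus profile subcommand does not implicitly create a profile, which the follow-up "profile list --output json" confirms.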

TestFunctional/parallel/ProfileCmd/profile_list (0.13s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1315: Took "89.381875ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1329: Took "35.840375ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.13s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.12s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1366: Took "88.077292ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1379: Took "34.612334ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.12s)
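Note: the --light variant returns in well under half the time because it skips probing each cluster's live status. A minimal sketch for consuming the JSON (the jq filter and the .valid[].Name schema are assumptions of this note, not asserted by the test):

	out/minikube-darwin-arm64 profile list -o json | jq -r '.valid[].Name'
	out/minikube-darwin-arm64 profile list -o json --light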

TestFunctional/parallel/MountCmd/any-port (7.43s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-312000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port1417900404/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1724955835901787000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port1417900404/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1724955835901787000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port1417900404/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1724955835901787000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port1417900404/001/test-1724955835901787000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-312000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-312000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (64.67425ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-312000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-arm64 -p functional-312000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Aug 29 18:23 created-by-test
-rw-r--r-- 1 docker docker 24 Aug 29 18:23 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Aug 29 18:23 test-1724955835901787000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-arm64 -p functional-312000 ssh cat /mount-9p/test-1724955835901787000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-312000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [23abc6e1-92eb-40b1-9e2c-44bca706cc5f] Pending
helpers_test.go:344: "busybox-mount" [23abc6e1-92eb-40b1-9e2c-44bca706cc5f] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [23abc6e1-92eb-40b1-9e2c-44bca706cc5f] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [23abc6e1-92eb-40b1-9e2c-44bca706cc5f] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.003710542s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-312000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-312000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 -p functional-312000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-312000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-312000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port1417900404/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.43s)
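Note: the flow above is the general 9p mount pattern; a minimal sketch, where $SRC is a placeholder for any host directory (the first findmnt probe can fail once, as it does above, while the mount daemon is still starting):

	out/minikube-darwin-arm64 mount -p functional-312000 $SRC:/mount-9p --alsologtostderr -v=1 &
	out/minikube-darwin-arm64 -p functional-312000 ssh "findmnt -T /mount-9p | grep 9p"
	out/minikube-darwin-arm64 -p functional-312000 ssh -- ls -la /mount-9p
	out/minikube-darwin-arm64 -p functional-312000 ssh "sudo umount -f /mount-9p"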

TestFunctional/parallel/MountCmd/specific-port (0.92s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-312000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port2520481796/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-312000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-312000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (65.557833ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-312000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-arm64 -p functional-312000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-312000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port2520481796/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-312000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-312000 ssh "sudo umount -f /mount-9p": exit status 1 (70.099333ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-312000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-312000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port2520481796/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (0.92s)

TestFunctional/parallel/MountCmd/VerifyCleanup (0.93s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-312000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3653839768/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-312000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3653839768/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-312000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3653839768/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-312000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-312000 ssh "findmnt -T" /mount1: exit status 1 (70.098333ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-312000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-312000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-312000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-darwin-arm64 mount -p functional-312000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-312000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3653839768/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-312000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3653839768/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-312000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3653839768/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (0.93s)

TestFunctional/delete_echo-server_images (0.05s)
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-312000
--- PASS: TestFunctional/delete_echo-server_images (0.05s)

TestFunctional/delete_my-image_image (0.02s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-312000
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.01s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-312000
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestMultiControlPlane/serial/StartCluster (198.92s)
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-692000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 
E0829 11:24:13.581579    1418 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/addons-048000/client.crt: no such file or directory" logger="UnhandledError"
E0829 11:24:54.545124    1418 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/addons-048000/client.crt: no such file or directory" logger="UnhandledError"
E0829 11:26:16.466579    1418 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/addons-048000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-darwin-arm64 start -p ha-692000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 : (3m18.726678542s)
ha_test.go:107: (dbg) Run:  out/minikube-darwin-arm64 -p ha-692000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (198.92s)
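Note: --ha starts the profile with multiple control-plane nodes; the invocation and the follow-up health check, exactly as the test ran them:

	out/minikube-darwin-arm64 start -p ha-692000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2
	out/minikube-darwin-arm64 -p ha-692000 status -v=7 --alsologtostderr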

TestMultiControlPlane/serial/DeployApp (5.25s)
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-692000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-692000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-darwin-arm64 kubectl -p ha-692000 -- rollout status deployment/busybox: (3.818979417s)
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-692000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-692000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-692000 -- exec busybox-7dff88458-fktxf -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-692000 -- exec busybox-7dff88458-rsn2x -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-692000 -- exec busybox-7dff88458-xmkwm -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-692000 -- exec busybox-7dff88458-fktxf -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-692000 -- exec busybox-7dff88458-rsn2x -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-692000 -- exec busybox-7dff88458-xmkwm -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-692000 -- exec busybox-7dff88458-fktxf -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-692000 -- exec busybox-7dff88458-rsn2x -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-692000 -- exec busybox-7dff88458-xmkwm -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.25s)

TestMultiControlPlane/serial/PingHostFromPods (0.72s)
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-692000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-692000 -- exec busybox-7dff88458-fktxf -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-692000 -- exec busybox-7dff88458-fktxf -- sh -c "ping -c 1 192.168.105.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-692000 -- exec busybox-7dff88458-rsn2x -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-692000 -- exec busybox-7dff88458-rsn2x -- sh -c "ping -c 1 192.168.105.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-692000 -- exec busybox-7dff88458-xmkwm -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-692000 -- exec busybox-7dff88458-xmkwm -- sh -c "ping -c 1 192.168.105.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (0.72s)
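Note: the pipeline in each exec extracts the host address from busybox nslookup output: awk 'NR==5' keeps the fifth line (the Address entry for host.minikube.internal) and cut -d' ' -f3 takes its third space-separated field; the resulting IP, 192.168.105.1 here (the qemu2 host-side gateway), is what each pod then pings.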

TestMultiControlPlane/serial/AddWorkerNode (87.57s)
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-692000 -v=7 --alsologtostderr
E0829 11:28:17.101764    1418 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/functional-312000/client.crt: no such file or directory" logger="UnhandledError"
E0829 11:28:17.108845    1418 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/functional-312000/client.crt: no such file or directory" logger="UnhandledError"
E0829 11:28:17.122322    1418 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/functional-312000/client.crt: no such file or directory" logger="UnhandledError"
E0829 11:28:17.145784    1418 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/functional-312000/client.crt: no such file or directory" logger="UnhandledError"
E0829 11:28:17.189218    1418 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/functional-312000/client.crt: no such file or directory" logger="UnhandledError"
E0829 11:28:17.272594    1418 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/functional-312000/client.crt: no such file or directory" logger="UnhandledError"
E0829 11:28:17.435975    1418 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/functional-312000/client.crt: no such file or directory" logger="UnhandledError"
E0829 11:28:17.759381    1418 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/functional-312000/client.crt: no such file or directory" logger="UnhandledError"
E0829 11:28:18.402769    1418 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/functional-312000/client.crt: no such file or directory" logger="UnhandledError"
E0829 11:28:19.686161    1418 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/functional-312000/client.crt: no such file or directory" logger="UnhandledError"
E0829 11:28:22.249619    1418 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/functional-312000/client.crt: no such file or directory" logger="UnhandledError"
E0829 11:28:27.373103    1418 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/functional-312000/client.crt: no such file or directory" logger="UnhandledError"
E0829 11:28:32.580165    1418 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/addons-048000/client.crt: no such file or directory" logger="UnhandledError"
E0829 11:28:37.616519    1418 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/functional-312000/client.crt: no such file or directory" logger="UnhandledError"
E0829 11:28:58.099773    1418 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/functional-312000/client.crt: no such file or directory" logger="UnhandledError"
E0829 11:29:00.308625    1418 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/addons-048000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-darwin-arm64 node add -p ha-692000 -v=7 --alsologtostderr: (1m27.354475958s)
ha_test.go:234: (dbg) Run:  out/minikube-darwin-arm64 -p ha-692000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (87.57s)

TestMultiControlPlane/serial/NodeLabels (0.13s)
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-692000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.13s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.24s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.24s)

TestMultiControlPlane/serial/CopyFile (4.18s)
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 -p ha-692000 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-692000 cp testdata/cp-test.txt ha-692000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-692000 ssh -n ha-692000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-692000 cp ha-692000:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMultiControlPlaneserialCopyFile2478805073/001/cp-test_ha-692000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-692000 ssh -n ha-692000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-692000 cp ha-692000:/home/docker/cp-test.txt ha-692000-m02:/home/docker/cp-test_ha-692000_ha-692000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-692000 ssh -n ha-692000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-692000 ssh -n ha-692000-m02 "sudo cat /home/docker/cp-test_ha-692000_ha-692000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-692000 cp ha-692000:/home/docker/cp-test.txt ha-692000-m03:/home/docker/cp-test_ha-692000_ha-692000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-692000 ssh -n ha-692000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-692000 ssh -n ha-692000-m03 "sudo cat /home/docker/cp-test_ha-692000_ha-692000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-692000 cp ha-692000:/home/docker/cp-test.txt ha-692000-m04:/home/docker/cp-test_ha-692000_ha-692000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-692000 ssh -n ha-692000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-692000 ssh -n ha-692000-m04 "sudo cat /home/docker/cp-test_ha-692000_ha-692000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-692000 cp testdata/cp-test.txt ha-692000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-692000 ssh -n ha-692000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-692000 cp ha-692000-m02:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMultiControlPlaneserialCopyFile2478805073/001/cp-test_ha-692000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-692000 ssh -n ha-692000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-692000 cp ha-692000-m02:/home/docker/cp-test.txt ha-692000:/home/docker/cp-test_ha-692000-m02_ha-692000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-692000 ssh -n ha-692000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-692000 ssh -n ha-692000 "sudo cat /home/docker/cp-test_ha-692000-m02_ha-692000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-692000 cp ha-692000-m02:/home/docker/cp-test.txt ha-692000-m03:/home/docker/cp-test_ha-692000-m02_ha-692000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-692000 ssh -n ha-692000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-692000 ssh -n ha-692000-m03 "sudo cat /home/docker/cp-test_ha-692000-m02_ha-692000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-692000 cp ha-692000-m02:/home/docker/cp-test.txt ha-692000-m04:/home/docker/cp-test_ha-692000-m02_ha-692000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-692000 ssh -n ha-692000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-692000 ssh -n ha-692000-m04 "sudo cat /home/docker/cp-test_ha-692000-m02_ha-692000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-692000 cp testdata/cp-test.txt ha-692000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-692000 ssh -n ha-692000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-692000 cp ha-692000-m03:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMultiControlPlaneserialCopyFile2478805073/001/cp-test_ha-692000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-692000 ssh -n ha-692000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-692000 cp ha-692000-m03:/home/docker/cp-test.txt ha-692000:/home/docker/cp-test_ha-692000-m03_ha-692000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-692000 ssh -n ha-692000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-692000 ssh -n ha-692000 "sudo cat /home/docker/cp-test_ha-692000-m03_ha-692000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-692000 cp ha-692000-m03:/home/docker/cp-test.txt ha-692000-m02:/home/docker/cp-test_ha-692000-m03_ha-692000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-692000 ssh -n ha-692000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-692000 ssh -n ha-692000-m02 "sudo cat /home/docker/cp-test_ha-692000-m03_ha-692000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-692000 cp ha-692000-m03:/home/docker/cp-test.txt ha-692000-m04:/home/docker/cp-test_ha-692000-m03_ha-692000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-692000 ssh -n ha-692000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-692000 ssh -n ha-692000-m04 "sudo cat /home/docker/cp-test_ha-692000-m03_ha-692000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-692000 cp testdata/cp-test.txt ha-692000-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-692000 ssh -n ha-692000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-692000 cp ha-692000-m04:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestMultiControlPlaneserialCopyFile2478805073/001/cp-test_ha-692000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-692000 ssh -n ha-692000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-692000 cp ha-692000-m04:/home/docker/cp-test.txt ha-692000:/home/docker/cp-test_ha-692000-m04_ha-692000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-692000 ssh -n ha-692000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-692000 ssh -n ha-692000 "sudo cat /home/docker/cp-test_ha-692000-m04_ha-692000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-692000 cp ha-692000-m04:/home/docker/cp-test.txt ha-692000-m02:/home/docker/cp-test_ha-692000-m04_ha-692000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-692000 ssh -n ha-692000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-692000 ssh -n ha-692000-m02 "sudo cat /home/docker/cp-test_ha-692000-m04_ha-692000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p ha-692000 cp ha-692000-m04:/home/docker/cp-test.txt ha-692000-m03:/home/docker/cp-test_ha-692000-m04_ha-692000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-692000 ssh -n ha-692000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p ha-692000 ssh -n ha-692000-m03 "sudo cat /home/docker/cp-test_ha-692000-m04_ha-692000-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (4.18s)
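Note: every hop in the copy matrix above reduces to two primitives, cp onto a node and ssh -n to read the file back; one host-to-node and one node-to-node pair, verbatim from the run:

	out/minikube-darwin-arm64 -p ha-692000 cp testdata/cp-test.txt ha-692000:/home/docker/cp-test.txt
	out/minikube-darwin-arm64 -p ha-692000 cp ha-692000:/home/docker/cp-test.txt ha-692000-m02:/home/docker/cp-test_ha-692000_ha-692000-m02.txt
	out/minikube-darwin-arm64 -p ha-692000 ssh -n ha-692000-m02 "sudo cat /home/docker/cp-test_ha-692000_ha-692000-m02.txt"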

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (80.04s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
E0829 11:38:17.076161    1418 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/functional-312000/client.crt: no such file or directory" logger="UnhandledError"
E0829 11:38:32.555218    1418 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /Users/jenkins/minikube-integration/19531-965/.minikube/profiles/addons-048000/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:281: (dbg) Done: out/minikube-darwin-arm64 profile list --output json: (1m20.042002833s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (80.04s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.05s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.05s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (3.59s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-526000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-526000 --output=json --user=testUser: (3.593916875s)
--- PASS: TestJSONOutput/stop/Command (3.59s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.2s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-705000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-705000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (94.798709ms)

-- stdout --
	{"specversion":"1.0","id":"dcec0fb4-ffd7-4b3e-9b49-c1a4dfd8d0eb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-705000] minikube v1.33.1 on Darwin 14.5 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"eb2ed666-4e4b-4c49-b7e7-c0e88a52a702","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19531"}}
	{"specversion":"1.0","id":"c6a15cfc-8f36-404f-b44a-5ac39e21d03b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19531-965/kubeconfig"}}
	{"specversion":"1.0","id":"f8df230c-751e-4b89-9287-e56f20b419c3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"530446e9-8db7-40da-b26c-fcdef8d393a8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"af6f748c-0dd3-4d70-b5cf-48f696085a1e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19531-965/.minikube"}}
	{"specversion":"1.0","id":"c309681a-f701-4a24-af10-e715a1955cfb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"ef36b302-d5d8-4729-88aa-4827c34b8d91","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-705000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-705000
--- PASS: TestErrorJSONOutput (0.20s)
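Note: with --output=json each stdout line is a CloudEvent, so the failure is machine-extractable; a minimal sketch (the jq step is an assumption of this note, not something the test runs):

	out/minikube-darwin-arm64 start -p json-output-error-705000 --memory=2200 --output=json --wait=true --driver=fail \
		| jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.name + ": " + .data.message'
	# DRV_UNSUPPORTED_OS: The driver 'fail' is not supported on darwin/arm64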

TestMainNoArgs (0.03s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.03s)

TestStoppedBinaryUpgrade/Setup (1.05s)
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.05s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.72s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-585000
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.72s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-185000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-185000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (99.874958ms)

-- stdout --
	* [NoKubernetes-185000] minikube v1.33.1 on Darwin 14.5 (arm64)
	  - MINIKUBE_LOCATION=19531
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19531-965/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19531-965/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
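Note: exit status 14 is the expected MK_USAGE rejection: --kubernetes-version cannot be combined with --no-kubernetes. Per the stderr hint above, the working sequence is:

	minikube config unset kubernetes-version
	out/minikube-darwin-arm64 start -p NoKubernetes-185000 --no-kubernetes --driver=qemu2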

TestNoKubernetes/serial/VerifyK8sNotRunning (0.05s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-185000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-185000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (46.5635ms)

-- stdout --
	* The control-plane node NoKubernetes-185000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-185000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.05s)

TestNoKubernetes/serial/ProfileList (0.11s)
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.11s)

TestNoKubernetes/serial/Stop (3.58s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-185000
* minikube v1.33.1 on darwin (arm64)
- MINIKUBE_LOCATION=19531
- KUBECONFIG=/Users/jenkins/minikube-integration/19531-965/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current793375235/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-arm64 stop -p NoKubernetes-185000: (3.583825792s)
--- PASS: TestNoKubernetes/serial/Stop (3.58s)
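Note: the "* minikube v1.33.1 on darwin (arm64)" block and the hyperkit DRV_UNSUPPORTED_OS error above are interleaved output from TestHyperkitDriverSkipUpgrade running in parallel (its MINIKUBE_HOME points at TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current...); they are not produced by the stop command, which completed cleanly in 3.58s.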

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-185000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-185000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (41.114708ms)

-- stdout --
	* The control-plane node NoKubernetes-185000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-185000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

TestStartStop/group/old-k8s-version/serial/Stop (3.44s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-225000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p old-k8s-version-225000 --alsologtostderr -v=3: (3.43763825s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (3.44s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.12s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-225000 -n old-k8s-version-225000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-225000 -n old-k8s-version-225000: exit status 7 (58.337333ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-225000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.12s)
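Note: with the profile stopped, status --format={{.Host}} prints "Stopped" and exits 7; the test treats that exit code as a stopped-but-present host ("may be ok") and then enables the dashboard addon against the stopped profile, as run above:

	out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-225000 -n old-k8s-version-225000
	out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-225000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4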

TestStartStop/group/no-preload/serial/Stop (2.93s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-622000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p no-preload-622000 --alsologtostderr -v=3: (2.930525375s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (2.93s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.12s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-622000 -n no-preload-622000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-622000 -n no-preload-622000: exit status 7 (57.330583ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-622000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/embed-certs/serial/Stop (2.89s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-142000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p embed-certs-142000 --alsologtostderr -v=3: (2.893175833s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (2.89s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.13s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-142000 -n embed-certs-142000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-142000 -n embed-certs-142000: exit status 7 (58.530417ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-142000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (1.77s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-502000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p default-k8s-diff-port-502000 --alsologtostderr -v=3: (1.766339583s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (1.77s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-502000 -n default-k8s-diff-port-502000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-502000 -n default-k8s-diff-port-502000: exit status 7 (58.697542ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-502000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-182000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)
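Note: this subtest exercises minikube's addon override flags rather than a real metrics-server deployment: --images=MetricsServer=... substitutes the addon's image, and --registries=MetricsServer=fake.domain points it at a deliberately unreachable registry, so the enable path is validated without pulling anything. The same override spelled out on multiple lines, with values copied from the log:

# Enable metrics-server with a substitute image and a fake registry;
# nothing is ever pulled from fake.domain, which is the point.
out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-182000 \
  --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
  --registries=MetricsServer=fake.domain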

TestStartStop/group/newest-cni/serial/Stop (3.22s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-182000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p newest-cni-182000 --alsologtostderr -v=3: (3.216022375s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (3.22s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-182000 -n newest-cni-182000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-182000 -n newest-cni-182000: exit status 7 (57.385083ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-182000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)


Test skip (21/274)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.31.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0/cached-images (0.00s)

TestDownloadOnly/v1.31.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0/binaries (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/HelmTiller (0s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:446: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (2.28s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-015000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-015000

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-015000

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-015000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-015000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-015000

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-015000

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-015000

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-015000

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-015000

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-015000

>>> host: /etc/nsswitch.conf:
* Profile "cilium-015000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-015000"

>>> host: /etc/hosts:
* Profile "cilium-015000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-015000"

>>> host: /etc/resolv.conf:
* Profile "cilium-015000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-015000"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-015000

>>> host: crictl pods:
* Profile "cilium-015000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-015000"

>>> host: crictl containers:
* Profile "cilium-015000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-015000"

>>> k8s: describe netcat deployment:
error: context "cilium-015000" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-015000" does not exist

>>> k8s: netcat logs:
error: context "cilium-015000" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-015000" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-015000" does not exist

>>> k8s: coredns logs:
error: context "cilium-015000" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-015000" does not exist

>>> k8s: api server logs:
error: context "cilium-015000" does not exist

>>> host: /etc/cni:
* Profile "cilium-015000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-015000"

>>> host: ip a s:
* Profile "cilium-015000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-015000"

>>> host: ip r s:
* Profile "cilium-015000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-015000"

>>> host: iptables-save:
* Profile "cilium-015000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-015000"

>>> host: iptables table nat:
* Profile "cilium-015000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-015000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-015000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-015000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-015000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-015000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-015000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-015000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-015000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-015000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-015000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-015000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-015000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-015000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-015000"

>>> host: kubelet daemon config:
* Profile "cilium-015000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-015000"

>>> k8s: kubelet logs:
* Profile "cilium-015000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-015000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-015000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-015000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-015000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-015000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-015000

>>> host: docker daemon status:
* Profile "cilium-015000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-015000"

>>> host: docker daemon config:
* Profile "cilium-015000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-015000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-015000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-015000"

>>> host: docker system info:
* Profile "cilium-015000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-015000"

>>> host: cri-docker daemon status:
* Profile "cilium-015000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-015000"

>>> host: cri-docker daemon config:
* Profile "cilium-015000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-015000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-015000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-015000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-015000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-015000"

>>> host: cri-dockerd version:
* Profile "cilium-015000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-015000"

>>> host: containerd daemon status:
* Profile "cilium-015000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-015000"

>>> host: containerd daemon config:
* Profile "cilium-015000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-015000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-015000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-015000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-015000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-015000"

>>> host: containerd config dump:
* Profile "cilium-015000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-015000"

>>> host: crio daemon status:
* Profile "cilium-015000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-015000"

>>> host: crio daemon config:
* Profile "cilium-015000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-015000"

>>> host: /etc/crio:
* Profile "cilium-015000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-015000"

>>> host: crio config:
* Profile "cilium-015000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-015000"

----------------------- debugLogs end: cilium-015000 [took: 2.171806875s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-015000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-015000
--- SKIP: TestNetworkPlugins/group/cilium (2.28s)
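Note: every probe in the debugLogs dump above fails with a missing context or missing profile because the test was skipped at net_test.go:102 before any cluster was created; the diagnostics still run during cleanup (hence the panic.go:626 frame) against a cilium-015000 profile that never existed. Confirming that from a shell uses exactly the commands the output itself suggests:

# No cilium-015000 entry appears in the profile list...
out/minikube-darwin-arm64 profile list
# ...and no cilium-015000 kubectl context exists, which is why every
# kubectl probe above reports a missing context.
kubectl config get-contexts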

TestStartStop/group/disable-driver-mounts (0.11s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-620000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-620000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.11s)
